Chaotic systems learning with hybrid echo state network/proper orthogonal decomposition based model
Mathias Lesjak and N. Doan
Data-Centric Engineering, published 2021-10-13. doi:10.1017/dce.2021.17

Abstract: We explore the possibility of combining a knowledge-based reduced order model (ROM) with a reservoir computing approach to learn and predict the dynamics of chaotic systems. The ROM is based on proper orthogonal decomposition (POD) with Galerkin projection to capture the essential dynamics of the chaotic system, while the reservoir computing approach is based on echo state networks (ESNs). Two different hybrid approaches are explored: one where the ESN corrects the modal coefficients of the ROM (hybrid-ESN-A) and one where the ESN uses and corrects the ROM prediction in full state space (hybrid-ESN-B). These approaches are applied to two chaotic systems, the Charney–DeVore system and the Kuramoto–Sivashinsky equation, and are compared to the ROM obtained using POD/Galerkin projection and to the data-only approach based solely on the ESN. The hybrid-ESN-B approach provides the best prediction accuracy, outperforming the other hybrid approach, the POD/Galerkin projection ROM, and the data-only ESN, especially when using ESNs with a small number of neurons. In addition, the influence of the accuracy of the ROM on the overall prediction accuracy of hybrid-ESN-B is assessed rigorously by considering ROMs composed of different numbers of POD modes. Further analysis of how hybrid-ESN-B blends the predictions from the ROM and the ESN to predict the evolution of the system is also provided.
Multi-resolution dynamic mode decomposition for damage detection in wind turbine gearboxes
Paolo Climaco, J. Garcke, and Rodrigo Iza-Teran
Data-Centric Engineering, published 2021-10-08. doi:10.1017/dce.2022.34

Abstract: We introduce an approach for damage detection in gearboxes based on the analysis of sensor data with the multi-resolution dynamic mode decomposition (mrDMD). The application focus is the condition monitoring of wind turbine gearboxes under varying load conditions, in particular irregular and stochastic wind fluctuations. We analyze data stemming from the simulated vibration response of a simple nonlinear gearbox model in healthy and damaged scenarios and under different wind conditions. With mrDMD applied to time-delay snapshots of the sensor data, we can extract components of these vibration signals that highlight features related to damage and enable its identification. A comparison with Fourier analysis, time synchronous averaging, and empirical mode decomposition shows the advantages of the proposed mrDMD-based data analysis approach for damage detection.
Performance and accuracy assessments of an incompressible fluid solver coupled with a deep convolutional neural network
Ekhi Ajuria Illarramendi, M. Bauerheim, and B. Cuenot
Data-Centric Engineering, published 2021-09-20. doi:10.1017/dce.2022.2

Abstract: The resolution of the Poisson equation is usually one of the most computationally intensive steps for incompressible fluid solvers. Lately, deep learning, and especially convolutional neural networks (CNNs), has been introduced to solve this equation, leading to significant reductions in inference time at the cost of a lack of guarantee on the accuracy of the solution. This drawback might lead to inaccuracies and potentially unstable simulations, and prevents fair assessments of the CNN speedup across different network architectures. To circumvent this issue, a hybrid strategy is developed, which couples a CNN with a traditional iterative solver to ensure a user-defined accuracy level. The CNN hybrid method is tested on two flow cases: (a) the flow around a 2D cylinder and (b) variable-density plumes with and without obstacles (both 2D and 3D), demonstrating remarkable generalization capabilities and ensuring both the accuracy and stability of the simulations. The error distribution of the predictions using several network architectures is further investigated in the plume test case. The introduced hybrid strategy allows a systematic evaluation of CNN performance at the same accuracy level for various network architectures. In particular, the importance of incorporating multiple scales in the network architecture is demonstrated, as it improves both accuracy and inference performance compared with feedforward CNN architectures. Thus, in addition to the evaluation of pure network performance, this study also yields numerous guidelines and results on how to build neural networks and computational strategies to predict unsteady flows with both accuracy and stability requirements.
Universal Digital Twin - A Dynamic Knowledge Graph
J. Akroyd, S. Mosbach, A. Bhave, and M. Kraft
Data-Centric Engineering, published 2021-09-06. doi:10.1017/dce.2021.10

Abstract: This paper introduces a dynamic knowledge-graph approach for digital twins and illustrates how this approach is by design naturally suited to realizing the vision of a Universal Digital Twin. The dynamic knowledge graph is implemented using technologies from the Semantic Web. It is composed of concepts and instances that are defined using ontologies, and of computational agents that operate on both the concepts and instances to update the dynamic knowledge graph. By construction, it is distributed, supports cross-domain interoperability, and ensures that data are connected, portable, discoverable, and queryable via a uniform interface. The knowledge graph includes the notions of a "base world" that describes the real world and that is maintained by agents that incorporate real-time data, and of "parallel worlds" that support the intelligent exploration of alternative designs without affecting the base world. Use cases are presented that demonstrate the ability of the dynamic knowledge graph to host geospatial and chemical data, control chemistry experiments, perform cross-domain simulations, and perform scenario analysis. The questions of how to make intelligent suggestions for alternative scenarios and how to ensure alignment between the scenarios considered by the knowledge graph and the goals of society are considered. Work to extend the dynamic knowledge graph to develop a digital twin of the UK to support the decarbonization of the energy system is discussed. Important directions for future research are highlighted.
Emulating computer experiments of transport infrastructure slope stability using Gaussian processes and Bayesian inference
A. Svalova, P. Helm, D. Prangle, M. Rouainia, S. Glendinning, and D. Wilkinson
Data-Centric Engineering, published 2021-09-06. doi:10.1017/dce.2021.14

Abstract: We propose using fully Bayesian Gaussian process emulation (GPE) as a surrogate for expensive computer experiments of transport infrastructure cut slopes in high-plasticity clay soils, which are associated with an increased risk of failure. Our deterioration experiments simulate the dissipation of excess pore water pressure and seasonal pore water pressure cycles to determine slope failure time. It is impractical to perform the number of computer simulations that would be sufficient to make slope stability predictions over a meaningful range of geometries and strength parameters. Therefore, a GPE is used as an interpolator over a set of optimally spaced simulator runs, modeling the time to slope failure as a function of geometry, strength, and permeability. Bayesian inference and Markov chain Monte Carlo simulation are used to obtain posterior estimates of the GPE parameters. For the experiments that do not reach failure within the model time of 184 years, the time to failure is stochastically imputed by the Bayesian model. The trained GPE has the potential to inform infrastructure slope design, management, and maintenance. The reduction in computational cost compared with the original simulator makes it a highly attractive tool that can be applied to the different spatio-temporal scales of transport networks.
On generative models as the basis for digital twins
G. Tsialiamanis, D. Wagg, N. Dervilis, and K. Worden
Data-Centric Engineering, published 2021-08-31. doi:10.1017/dce.2021.13

Abstract: A framework is proposed for generative models as a basis for digital twins or mirrors of structures. The proposal is based on the premise that deterministic models cannot account for the uncertainty present in most structural modeling applications. Two different types of generative models are considered here. The first is a physics-based model built on the stochastic finite element (SFE) method, which is widely used when modeling structures with material and loading uncertainties. Such models can be calibrated according to data from the structure and would be expected to outperform any other model if the modeling accurately captures the true underlying physics of the structure. The potential use of SFE models as digital mirrors is illustrated via application to a linear structure with stochastic material properties. For situations where the physical formulation of such models does not suffice, a data-driven framework is proposed, using machine learning and conditional generative adversarial networks (cGANs). The latter algorithm is used to learn the distribution of the quantity of interest in a structure with material nonlinearities and uncertainties. For the examples considered in this work, the data-driven cGAN model outperforms the physics-based approach. Finally, an example is shown where the two methods are coupled such that a hybrid modeling approach is demonstrated.
A greedy data collection scheme for linear dynamical systems
Karim Cherifi, P. Goyal, and P. Benner
Data-Centric Engineering, published 2021-07-28. doi:10.1017/dce.2022.16

Abstract: Mathematical models are essential to analyze and understand the dynamics of complex systems. Recently, data-driven methodologies have received considerable attention, leveraging advancements in sensor technology. However, the quality of the obtained data plays a vital role in learning a good and reliable model. Therefore, in this paper, we propose an efficient heuristic methodology to collect data in both the frequency domain and the time domain, aiming to gain more information from limited experimental data than equidistant sampling would provide. In the frequency domain, the interpolation points are restricted to the imaginary axis, as the transfer function can be estimated easily there. The efficiency of the proposed methodology is illustrated by means of several examples, and its robustness in the presence of noisy data is shown.
Bayesian model uncertainty quantification for hyperelastic soft tissue models
Milad Zeraatpisheh, S. Bordas, and L. Beex
Data-Centric Engineering, published 2021-07-13. doi:10.1017/dce.2021.9

Abstract: Patient-specific surgical simulations require the patient-specific identification of the constitutive parameters. The sparsity of the experimental data and the substantial noise in the data (e.g., recovered during surgery) cause considerable uncertainty in the identification. In this exploratory work, parameter uncertainty for incompressible hyperelasticity, often used for soft tissues, is addressed by a probabilistic identification approach based on Bayesian inference. Our study particularly focuses on the uncertainty of the model: we investigate how the identified uncertainties of the constitutive parameters behave when different forms of model uncertainty are considered. The model uncertainty formulations range from uninformative ones to more accurate ones that incorporate more detailed extensions of incompressible hyperelasticity. The study shows that incorporating model uncertainty may improve the results, but this is not guaranteed.
Data-based polyhedron model for optimization of engineering structures involving uncertainties
Z. Qiu, Han Wu, I. Elishakoff, and Dongliang Liu
Data-Centric Engineering, published 2021-06-29. doi:10.1017/dce.2021.8

Abstract: This paper studies the data-based polyhedron model and its application to uncertain linear optimization of engineering structures, especially in the absence of information either on probabilistic properties or about membership functions for the fuzzy-set-based approach, in which case it is more appropriate to quantify the uncertainties by convex polyhedra. Firstly, we introduce the uncertainty quantification method of the convex polyhedron approach and the model modification method using the Chebyshev inequality. Secondly, the characteristics of the optimal solution of convex polyhedron linear programming are investigated. Then the vertex solution of convex polyhedron linear programming is presented and proven. Next, the application of convex polyhedron linear programming to the static load-bearing capacity problem is introduced. Finally, the effectiveness of the vertex solution is verified by an example of a plane truss bearing problem, and its efficiency is verified by a load-bearing problem of stiffened composite plates.
Learning stable reduced-order models for hybrid twins
Abel Sancarlos, Morgan Cameron, Jean-Marc Le Peuvedic, J. Groulier, J. Duval, E. Cueto, and F. Chinesta
Data-Centric Engineering, published 2021-06-07. doi:10.1017/dce.2021.16

Abstract: The concept of the "hybrid twin" (HT) has recently received growing interest thanks to the availability of powerful machine learning techniques. This twin concept combines physics-based models, set within a model order reduction framework to obtain real-time feedback rates, with data science. Thus, the main idea of the HT is to develop on-the-fly data-driven models that correct possible deviations between measurements and physics-based model predictions. This paper focuses on the computation of stable, fast, and accurate corrections in the HT framework. Furthermore, regarding the delicate and important problem of stability, a new approach is proposed, introducing several subvariants and guaranteeing both a low computational cost and stable time integration.