Abstract In many complex practical optimization cases, the dominant characteristics of the problem are often not known a priori. Therefore, there is a need to develop general solvers, as it is not always possible to tailor a specialized approach to each application. The previously developed multilevel selection genetic algorithm (MLSGA) already shows good performance on a range of problems due to its diversity-first approach, which is rare among evolutionary algorithms. To increase the generality of its performance, this paper proposes utilizing multiple distinct evolutionary strategies simultaneously, similarly to algorithm selection, but with coevolutionary mechanisms between the subpopulations. This distinctive approach to coevolution provides less regular communication between subpopulations, with competition between collectives rather than individuals. This encourages the collectives to act more independently, creating a unique subregional search, and leads to the development of coevolutionary MLSGA (cMLSGA). To test this methodology, nine genetic algorithms are selected to generate several variants of cMLSGA, which incorporate these approaches at the individual level. The mechanisms are tested on 100 different functions and benchmarked against nine state-of-the-art competitors to evaluate the generality of each approach. The results show that the divergence in the working principles of the selected coevolutionary approaches is more important than their individual performance. The proposed methodology shows the most uniform performance across the divergent problem types among the tested state of the art, leading to an algorithm more likely to solve complex problems with limited knowledge about the search space, although it is outperformed by more specialized solvers on simpler benchmarking studies.
{"title":"Coevolutionary strategies at the collective level for improved generalism","authors":"P. Grudniewski, A. Sobey","doi":"10.1017/dce.2023.1","DOIUrl":"https://doi.org/10.1017/dce.2023.1","url":null,"abstract":"Abstract In many complex practical optimization cases, the dominant characteristics of the problem are often not known prior. Therefore, there is a need to develop general solvers as it is not always possible to tailor a specialized approach to each application. The previously developed multilevel selection genetic algorithm (MLSGA) already shows good performance on a range of problems due to its diversity-first approach, which is rare among evolutionary algorithms. To increase the generality of its performance, this paper proposes utilization of multiple distinct evolutionary strategies simultaneously, similarly to algorithm selection, but with coevolutionary mechanisms between the subpopulations. This distinctive approach to coevolution provides less regular communication between subpopulations with competition between collectives rather than individuals. This encourages the collectives to act more independently creating a unique subregional search, leading to the development of coevolutionary MLSGA (cMLSGA). To test this methodology, nine genetic algorithms are selected to generate several variants of cMLSGA, which incorporates these approaches at the individual level. The mechanisms are tested on 100 different functions and benchmarked against the 9 state-of-the-art competitors to evaluate the generality of each approach. The results show that the diversity divergence in the principles of working of the selected coevolutionary approaches is more important than their individual performances. The proposed methodology has the most uniform performance on the divergent problem types, from across the tested state of the art, leading to an algorithm more likely to solve complex problems with limited knowledge about the search space, but is outperformed by more specialized solvers on simpler benchmarking studies.","PeriodicalId":34169,"journal":{"name":"DataCentric Engineering","volume":" ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2023-02-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"49280812","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Abstract Multiphase segmentation of pore-scale features and identification of mineralogy from digital images of materials is critical for many applications in the natural resources sector. However, the materials involved (rocks, catalyst pellets, and synthetic alloys) have complex and unpredictable compositions. Algorithms that can be extended for multiphase segmentation of images of these materials are relatively few and very human-intensive. Challenges lie in designing algorithms that are context free, can function with less training data, and can handle the unpredictability of material composition. Semisupervised algorithms have shown success in classification in situations characterized by limited training data; they use unlabeled data in addition to labeled data to produce classifications, and the resulting segmentation can be more accurate than that of fully supervised learning approaches. This work proposes using a semisupervised clustering algorithm named Continuous Iterative Guided Spectral Class Rejection (CIGSCR) for multiphase segmentation of digital scans of materials. CIGSCR harnesses spectral cohesion, splitting the intensity histogram of the input image into clusters. This splitting provides the foundation for classification strategies that can be implemented as postprocessing steps to obtain the final segmentation. One classification strategy is presented. Micro-computed tomography scans of rocks are used to present the results. It is demonstrated that CIGSCR successfully distinguishes features up to the uniqueness of grayscale values and extracts features present in full image stacks (3D), including features not present in the training data. Results including instances of success and limitations are presented. Scalability to data sizes of $\mathcal{O}(10^9)$ voxels is briefly discussed.
{"title":"Multiphase segmentation of digital material images","authors":"R. Saxena, R. Day-Stirrat, Chaitanya Pradhan","doi":"10.1017/dce.2022.40","DOIUrl":"https://doi.org/10.1017/dce.2022.40","url":null,"abstract":"Abstract Multiphase segmentation of pore-scale features and identification of mineralogy from digital images of materials is critical for many applications in the natural resources sector. However, the materials involved (rocks, catalyst pellets, and synthetic alloys) have complex and unpredictable composition. Algorithms that can be extended for multiphase segmentation of images of these materials are relatively few and very human-intensive. Challenges lie in designing algorithms that are context free, can function with less training data, and can handle the unpredictability of material composition. Semisupervised algorithms have shown success in classification in situations characterized by limited training data; they use unlabeled data in addition to labeled data to produce classification. The segmentation obtained can be more accurate than fully supervised learning approaches. This work proposes using a semisupervised clustering algorithm named Continuous Iterative Guided Spectral Class Rejection (CIGSCR) toward multiphase segmentation of digital scans of materials. CIGSCR harnesses spectral cohesion, splitting the intensity histogram of the input image down into clusters. This splitting provides the foundation for classification strategies that can be implemented as postprocessing steps to get the final segmentation. One classification strategy is presented. Micro-computed tomography scans of rocks are used to present the results. It is demonstrated that CIGSCR successfully enables distinguishing features up to the uniqueness of grayscale values, and extracting features present in full image stacks (3D), including features not presented in the training data. Results including instances of success and limitations are presented. Scalability to data sizes $ mathcal{O}left({10}^9right) $ voxels is briefly discussed.","PeriodicalId":34169,"journal":{"name":"DataCentric Engineering","volume":" ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2023-02-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"41905322","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Abstract Data assimilation of flow measurements is an essential tool for extracting information in fluid dynamics problems. Recent works have shown that physics-informed neural networks (PINNs) enable the reconstruction of unsteady fluid flows, governed by the Navier–Stokes equations, if the network is given enough flow measurements that are appropriately distributed in time and space. In many practical applications, however, experimental measurements involve only time-averaged quantities or their higher-order statistics, which are governed by the under-determined Reynolds-averaged Navier–Stokes (RANS) equations. In this study, we perform PINN-based reconstruction of time-averaged quantities of an unsteady flow from sparse velocity data. The applied technique leverages the time-averaged velocity data to infer unknown closure quantities (the curl of the unsteady RANS forcing), as well as to interpolate the fields from sparse measurements. Furthermore, the method's capabilities are extended to the assimilation of Reynolds stresses, where PINNs successfully interpolate the data to complete the velocity and stress fields and gain insight into the pressure field of the investigated flow.
{"title":"Mean flow reconstruction of unsteady flows using physics-informed neural networks","authors":"Lukasz Sliwinski, Georgios Rigas","doi":"10.1017/dce.2022.37","DOIUrl":"https://doi.org/10.1017/dce.2022.37","url":null,"abstract":"Abstract Data assimilation of flow measurements is an essential tool for extracting information in fluid dynamics problems. Recent works have shown that the physics-informed neural networks (PINNs) enable the reconstruction of unsteady fluid flows, governed by the Navier–Stokes equations, if the network is given enough flow measurements that are appropriately distributed in time and space. In many practical applications, however, experimental measurements involve only time-averaged quantities or their higher order statistics which are governed by the under-determined Reynolds-averaged Navier–Stokes (RANS) equations. In this study, we perform PINN-based reconstruction of time-averaged quantities of an unsteady flow from sparse velocity data. The applied technique leverages the time-averaged velocity data to infer unknown closure quantities (curl of unsteady RANS forcing), as well as to interpolate the fields from sparse measurements. Furthermore, the method’s capabilities are extended further to the assimilation of Reynolds stresses where PINNs successfully interpolate the data to complete the velocity as well as the stresses fields and gain insight into the pressure field of the investigated flow.","PeriodicalId":34169,"journal":{"name":"DataCentric Engineering","volume":" ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2023-01-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"42075393","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
M. Neaimeh, M. Deakin, Ryan Jenkinson, Oscar Giles
Abstract The uptake of electric vehicles (EVs) and renewable energy technologies is changing the magnitude, variability, and direction of power flows in electricity networks. To ensure a successful transition to a net zero energy system, it will be necessary for a wide range of stakeholders to understand the impacts of these changing flows on networks. However, there is a gap between those with the data and capabilities to understand electricity networks, such as network operators, and those working on adjacent parts of the energy transition jigsaw, such as electricity suppliers and EV charging infrastructure operators. This paper describes the electric vehicle network analysis tool (EVENT), developed to help make network analysis accessible to a wider range of stakeholders in the energy ecosystem who might not have the bandwidth to curate and integrate disparate datasets and carry out electricity network simulations. EVENT analyses the potential impacts of low-carbon technologies on congestion in electricity networks, helping to inform the design of products and services. To demonstrate EVENT’s potential, we use an extensive smart meter dataset provided by an energy supplier to assess the impacts of electricity smart tariffs on networks. Results suggest both network operators and energy suppliers will have to work much more closely together to ensure that the flexibility of customers to support the energy system can be maximized, while respecting safety and security constraints within networks. EVENT’s modular and open-source approach enables integration of new methods and data, future-proofing the tool for long-term impact.
{"title":"Democratizing electricity distribution network analysis","authors":"M. Neaimeh, M. Deakin, Ryan Jenkinson, Oscar Giles","doi":"10.1017/dce.2022.41","DOIUrl":"https://doi.org/10.1017/dce.2022.41","url":null,"abstract":"Abstract The uptake of electric vehicles (EVs) and renewable energy technologies is changing the magnitude, variability, and direction of power flows in electricity networks. To ensure a successful transition to a net zero energy system, it will be necessary for a wide range of stakeholders to understand the impacts of these changing flows on networks. However, there is a gap between those with the data and capabilities to understand electricity networks, such as network operators, and those working on adjacent parts of the energy transition jigsaw, such as electricity suppliers and EV charging infrastructure operators. This paper describes the electric vehicle network analysis tool (EVENT), developed to help make network analysis accessible to a wider range of stakeholders in the energy ecosystem who might not have the bandwidth to curate and integrate disparate datasets and carry out electricity network simulations. EVENT analyses the potential impacts of low-carbon technologies on congestion in electricity networks, helping to inform the design of products and services. To demonstrate EVENT’s potential, we use an extensive smart meter dataset provided by an energy supplier to assess the impacts of electricity smart tariffs on networks. Results suggest both network operators and energy suppliers will have to work much more closely together to ensure that the flexibility of customers to support the energy system can be maximized, while respecting safety and security constraints within networks. EVENT’s modular and open-source approach enables integration of new methods and data, future-proofing the tool for long-term impact.","PeriodicalId":34169,"journal":{"name":"DataCentric Engineering","volume":" ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2023-01-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"42205391","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Qinyu Zhuang, Dirk Hartmann, H. Bungartz, Juan M Lorenzi
Abstract Model order reduction (MOR) can provide low-dimensional numerical models for fast simulation. Unlike intrusive methods, nonintrusive methods are attractive because they can be applied even without access to full order models (FOMs). Since nonintrusive MOR methods rely strongly on snapshots of the FOMs, constructing good snapshot sets becomes crucial. In this work, we propose a novel active-learning-based approach for use in conjunction with nonintrusive MOR methods. It is based on two crucial novelties. First, our approach uses joint space sampling to prepare a data pool from which the training data are drawn. The training data are selected from the data pool using a greedy strategy supported by an error estimator based on Gaussian process regression. Second, we introduce a case-independent validation strategy based on probably approximately correct learning. While the methods proposed here can be applied to different MOR methods, we test them here with artificial neural networks and operator inference.
{"title":"Active-learning-based nonintrusive model order reduction","authors":"Qinyu Zhuang, Dirk Hartmann, H. Bungartz, Juan M Lorenzi","doi":"10.1017/dce.2022.39","DOIUrl":"https://doi.org/10.1017/dce.2022.39","url":null,"abstract":"Abstract Model order reduction (MOR) can provide low-dimensional numerical models for fast simulation. Unlike intrusive methods, nonintrusive methods are attractive because they can be applied even without access to full order models (FOMs). Since nonintrusive MOR methods strongly rely on snapshots of the FOMs, constructing good snapshot sets becomes crucial. In this work, we propose a novel active-learning-based approach for use in conjunction with nonintrusive MOR methods. It is based on two crucial novelties. First, our approach uses joint space sampling to prepare a data pool of the training data. The training data are selected from the data pool using a greedy strategy supported by an error estimator based on Gaussian process regression. Second, we introduce a case-independent validation strategy based on probably approximately correct learning. While the methods proposed here can be applied to different MOR methods, we test them here with artificial neural networks and operator inference.","PeriodicalId":34169,"journal":{"name":"DataCentric Engineering","volume":" ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2023-01-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"46974870","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Abstract In this perspective, I give my answer to the question of how quantum computing will impact on data-intensive applications in engineering and science. I focus on quantum Monte Carlo integration as a likely source of (relatively) near-term quantum advantage, but also discuss some other ideas that have garnered widespread interest.
{"title":"Quantum computing for data-centric engineering and science","authors":"Steven Herbert","doi":"10.1017/dce.2022.36","DOIUrl":"https://doi.org/10.1017/dce.2022.36","url":null,"abstract":"Abstract In this perspective, I give my answer to the question of how quantum computing will impact on data-intensive applications in engineering and science. I focus on quantum Monte Carlo integration as a likely source of (relatively) near-term quantum advantage, but also discuss some other ideas that have garnered widespread interest.","PeriodicalId":34169,"journal":{"name":"DataCentric Engineering","volume":" ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2022-12-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"45155051","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Joaquin Bilbao, E. Lourens, A. Schulze, L. Ziegler
Abstract Wind turbine towers are subjected to highly varying internal loads, characterized by large uncertainty. The uncertainty stems from many factors, including what the actual wind fields experienced over time will be, modeling uncertainties given the various operational states of the turbine with and without controller interaction, the influence of aerodynamic damping, and so forth. To monitor the true experienced loading and assess the fatigue, strain sensors can be installed at fatigue-critical locations on the turbine structure. A more cost-effective and practical solution is to predict the strain response of the structure based only on a number of acceleration measurements. In this contribution, an approach is followed where the dynamic strains in an existing onshore wind turbine tower are predicted using a Gaussian process latent force model. By employing this model, both the applied dynamic loading and strain response are estimated based on the acceleration data. The predicted dynamic strains are validated using strain gauges installed near the bottom of the tower. Fatigue is subsequently assessed by comparing the damage equivalent loads calculated with the predicted as opposed to the measured strains. The results confirm the usefulness of the method for continuous tracking of fatigue life consumption in onshore wind turbine towers.
{"title":"Virtual sensing in an onshore wind turbine tower using a Gaussian process latent force model","authors":"Joaquin Bilbao, E. Lourens, A. Schulze, L. Ziegler","doi":"10.1017/dce.2022.38","DOIUrl":"https://doi.org/10.1017/dce.2022.38","url":null,"abstract":"Abstract Wind turbine towers are subjected to highly varying internal loads, characterized by large uncertainty. The uncertainty stems from many factors, including what the actual wind fields experienced over time will be, modeling uncertainties given the various operational states of the turbine with and without controller interaction, the influence of aerodynamic damping, and so forth. To monitor the true experienced loading and assess the fatigue, strain sensors can be installed at fatigue-critical locations on the turbine structure. A more cost-effective and practical solution is to predict the strain response of the structure based only on a number of acceleration measurements. In this contribution, an approach is followed where the dynamic strains in an existing onshore wind turbine tower are predicted using a Gaussian process latent force model. By employing this model, both the applied dynamic loading and strain response are estimated based on the acceleration data. The predicted dynamic strains are validated using strain gauges installed near the bottom of the tower. Fatigue is subsequently assessed by comparing the damage equivalent loads calculated with the predicted as opposed to the measured strains. The results confirm the usefulness of the method for continuous tracking of fatigue life consumption in onshore wind turbine towers.","PeriodicalId":34169,"journal":{"name":"DataCentric Engineering","volume":" ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2022-11-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"46747393","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Abstract Model-based systems engineering (MBSE) aims at creating a model of a system under development, covering the complete system with a level of detail that allows its behavior to be defined and understood and enables any interface and work package to be defined based on the model. Once the model is established, further benefits can be reaped, such as the analysis of complex technical correlations within the system. Various insights can be gained by representing the model as a formal graph and querying it. To enable such queries, a graph schema is necessary, which allows the model to be transferred into a graph database. In this paper, we discuss the design of a graph schema and MBSE modeling approach that enable in-depth system analysis and anomaly resolution in complex embedded systems, with a focus on testing and anomaly resolution. The schema and modeling approach are designed to answer questions such as: What happens if there is an electrical short in a component? Which other components are now offline, and which data can no longer be gathered? If a component becomes unresponsive, which alternative routes can be established to obtain data processed by it? We build on the use case of qualification and operations of a small spacecraft. Structural elements of the MBSE model are transferred to a graph database where analyses are conducted on the system. The schema is implemented by means of an adapter from MagicDraw to Neo4j. A selection of complex analyses is shown in the example of the MOVE-II space mission.
{"title":"An approach for system analysis with model-based systems engineering and graph data engineering","authors":"F. Schummer, Maximillian Hyba","doi":"10.1017/dce.2022.33","DOIUrl":"https://doi.org/10.1017/dce.2022.33","url":null,"abstract":"Abstract Model-based systems engineering (MBSE) aims at creating a model of a system under development, covering the complete system with a level of detail that allows to define and understand its behavior and enables to define any interface and work package based on the model. Once the model is established, further benefits can be reaped, such as the analysis of complex technical correlations within the system. Various insights can be gained by displaying the model as a formal graph and querying it. To enable such queries, a graph schema is necessary, which allows to transfer the model into a graph database. In the course of this paper, we discuss the design of a graph schema and MBSE modeling approach, enabling deep going system analysis and anomaly resolution in complex embedded systems with a focus on testing and anomaly resolution. The schema and modeling approach are designed to answer questions such as What happens if there is an electrical short in a component? Which other components are now offline and which data cannot be gathered anymore? If a component becomes unresponsive, which alternative routes can be established to obtain data processed by it. We build on the use case of qualification and operations of a small spacecraft. Structural elements of the MBSE model are transferred to a graph database where analyses are conducted on the system. The schema is implemented by means of an adapter for MagicDraw to Neo4J. A selection of complex analyses is shown in the example of the MOVE-II space mission.","PeriodicalId":34169,"journal":{"name":"DataCentric Engineering","volume":" ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2022-11-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"49531745","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
M. Kreitmair, N. Makasis, K. Menberg, A. Bidarmaghz, G. Farr, D. Boon, R. Choudhary
Abstract Understanding the subsurface is crucial in building a sustainable future, particularly for urban centers. Importantly, the thermal effects that anthropogenic infrastructure, such as buildings, tunnels, and ground heat exchangers, can have on this shared resource need to be well understood in order to avoid issues, such as overheating the ground, and to identify opportunities, such as extracting and utilizing excess heat. However, obtaining data for the subsurface can be costly, typically requiring the drilling of boreholes. Bayesian statistical methodologies can help overcome this by inferring information about the ground from a combination of field data and numerical modeling, while quantifying the associated uncertainties. This work utilizes data obtained in the city of Cardiff, UK, to evaluate the applicability of a Bayesian calibration approach (using Gaussian process surrogates) to measured data and its associated challenges (previously untested) and to obtain insights into the subsurface of the area. The importance of the data set size is analyzed, showing that more data are required under realistic conditions (field data) than under controlled conditions (numerically generated data), highlighting the importance of identifying the data points that contain the most information. Heterogeneity of the ground (i.e., of the input parameters), which can be particularly prominent in large-scale subsurface domains, is also investigated, showing that the calibration methodology can still yield reasonably accurate results under heterogeneous conditions. Finally, the impact of considering uncertainty in subsurface properties is demonstrated for an existing shallow geothermal system in the area, showing a ground capacity higher than that currently utilized and the potential for a larger-scale system given sufficient demand.
{"title":"Bayesian parameter inference for shallow subsurface modeling using field data and impacts on geothermal planning","authors":"M. Kreitmair, N. Makasis, K. Menberg, A. Bidarmaghz, G. Farr, D. Boon, R. Choudhary","doi":"10.1017/dce.2022.32","DOIUrl":"https://doi.org/10.1017/dce.2022.32","url":null,"abstract":"Abstract Understanding the subsurface is crucial in building a sustainable future, particularly for urban centers. Importantly, the thermal effects that anthropogenic infrastructure, such as buildings, tunnels, and ground heat exchangers, can have on this shared resource need to be well understood to avoid issues, such as overheating the ground, and to identify opportunities, such as extracting and utilizing excess heat. However, obtaining data for the subsurface can be costly, typically requiring the drilling of boreholes. Bayesian statistical methodologies can be used towards overcoming this, by inferring information about the ground by combining field data and numerical modeling, while quantifying associated uncertainties. This work utilizes data obtained in the city of Cardiff, UK, to evaluate the applicability of a Bayesian calibration (using GP surrogates) approach to measured data and associated challenges (previously not tested) and to obtain insights on the subsurface of the area. The importance of the data set size is analyzed, showing that more data are required in realistic (field data), compared to controlled conditions (numerically-generated data), highlighting the importance of identifying data points that contain the most information. Heterogeneity of the ground (i.e., input parameters), which can be particularly prominent in large-scale subsurface domains, is also investigated, showing that the calibration methodology can still yield reasonably accurate results under heterogeneous conditions. Finally, the impact of considering uncertainty in subsurface properties is demonstrated in an existing shallow geothermal system in the area, showing a higher than utilized ground capacity, and the potential for a larger scale system given sufficient demand.","PeriodicalId":34169,"journal":{"name":"DataCentric Engineering","volume":" ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2022-11-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"44698552","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Freddie Markanday, G. Conduit, B. Conduit, J. Pürstl, K. Christofidou, L. Chechik, G. Baxter, C. Heason, H. Stone
Abstract A neural network framework is used to design a new Ni-based superalloy that surpasses the performance of IN718 for laser-blown-powder directed-energy-deposition repair applications. The framework utilized a large database comprising physical and thermodynamic properties for different alloy compositions to learn both composition-to-property and property-to-property relationships. The alloy composition space was based on IN718, although W was additionally included and the limiting Al and Co contents were allowed to increase compared to standard IN718, thereby allowing the alloy to approach the composition of ATI 718Plus® (718Plus). The composition with the highest probability of satisfying target properties, including phase stability, solidification strain, and tensile strength, was identified. The alloy was fabricated, and its properties were experimentally investigated. The testing confirms that this alloy offers advantages for additive repair applications over standard IN718.
{"title":"Design of a Ni-based superalloy for laser repair applications using probabilistic neural network identification","authors":"Freddie Markanday, G. Conduit, B. Conduit, J. Pürstl, K. Christofidou, L. Chechik, G. Baxter, C. Heason, H. Stone","doi":"10.1017/dce.2022.31","DOIUrl":"https://doi.org/10.1017/dce.2022.31","url":null,"abstract":"Abstract A neural network framework is used to design a new Ni-based superalloy that surpasses the performance of IN718 for laser-blown-powder directed-energy-deposition repair applications. The framework utilized a large database comprising physical and thermodynamic properties for different alloy compositions to learn both composition to property and also property to property relationships. The alloy composition space was based on IN718, although, W was additionally included and the limiting Al and Co content were allowed to increase compared standard IN718, thereby allowing the alloy to approach the composition of ATI 718Plus® (718Plus). The composition with the highest probability of satisfying target properties including phase stability, solidification strain, and tensile strength was identified. The alloy was fabricated, and the properties were experimentally investigated. The testing confirms that this alloy offers advantages for additive repair applications over standard IN718.","PeriodicalId":34169,"journal":{"name":"DataCentric Engineering","volume":" ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2022-10-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"49433968","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}