A. Chadzynski, Shiying Li, Ayda Grisiute, Jefferson Chua, Markus Hofmeister, Jingya Yan, Huay Yi Tai, Emily Lloyd, Yi Kai Tsai, Mehal Agarwal, J. Akroyd, P. Herthogs, Markus Kraft
Abstract This article presents a system architecture and a set of interfaces for building scalable information systems capable of large-scale city modeling based on dynamic geospatial knowledge graphs, avoiding the pitfalls of Web 2.0 applications while blending artificial and human intelligence during knowledge enhancement processes. We designed and developed a GeoSpatial Processor, an SQL2SPARQL Transformer, and geospatial tile-ordering tasks, and integrated them into a City Export Agent to visualize and interact with city models on an augmented 3D web client. We designed a Thematic Surface Discovery Agent that automatically upgrades the model’s level of detail so that other agents can interact with the thematic parts of city objects. We developed a City Information Agent to help retrieve contextual information, provide data concerning city regulations, and work with a City Energy Analyst Agent that automatically estimates the energy demands of city model members. We designed a Distance Agent to track interactions with model members on the web, calculate distances between objects of interest, and add new knowledge to the Cities Knowledge Graph. The logical foundations and CityGML-based conceptual schema used to describe cities in terms of the OntoCityGML ontology, together with the system of intelligent autonomous agents based on the J-Park Simulator Agent Framework, make such systems capable of assessing and maintaining ground truths with certainty. This new generation of GeoWeb 2.5 systems lowers the risk of deliberate misinformation within geography web systems used to model critical infrastructures.
{"title":"Semantic 3D city interfaces—Intelligent interactions on dynamic geospatial knowledge graphs","authors":"A. Chadzynski, Shiying Li, Ayda Grisiute, Jefferson Chua, Markus Hofmeister, Jingya Yan, Huay Yi Tai, Emily Lloyd, Yi Kai Tsai, Mehal Agarwal, J. Akroyd, P. Herthogs, Markus Kraft","doi":"10.1017/dce.2023.14","DOIUrl":"https://doi.org/10.1017/dce.2023.14","url":null,"abstract":"Abstract This article presents a system architecture and a set of interfaces that can build scalable information systems capable of large city modeling based on dynamic geospatial knowledge graphs to avoid pitfalls of Web 2.0 applications while blending artificial and human intelligence during the knowledge enhancement processes. We designed and developed a GeoSpatial Processor, an SQL2SPARQL Transformer, and a geospatial tiles ordering tasks and integrated them into a City Export Agent to visualize and interact with city models on an augmented 3D web client. We designed a Thematic Surface Discovery Agent to automatically upgrade the model’s level of detail to interact with thematic parts of city objects by other agents. We developed a City Information Agent to help retrieve contextual information, provide data concerning city regulations, and work with a City Energy Analyst Agent that automatically estimates the energy demands for city model members. We designed a Distance Agent to track the interactions with the model members on the web, calculate distances between objects of interest, and add new knowledge to the Cities Knowledge Graph. The logical foundations and CityGML-based conceptual schema used to describe cities in terms of the OntoCityGML ontology, together with the system of intelligent autonomous agents based on the J-Park Simulator Agent Framework, make such systems capable of assessing and maintaining ground truths with certainty. This new era of GeoWeb 2.5 systems lowers the risk of deliberate misinformation within geography web systems used for modeling critical infrastructures.","PeriodicalId":34169,"journal":{"name":"DataCentric Engineering","volume":" ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2023-09-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"43490090","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Josh W. Nevin, E. Sillekens, Ronit Sohanpal, L. Galdino, Sam Nallaperuma, P. Bayvel, S. Savory
Abstract We present a novel methodology for optimizing fiber optic network performance by determining the ideal values for attenuation, nonlinearity, and dispersion parameters in terms of achieved signal-to-noise ratio (SNR) gain from digital backpropagation (DBP). Our approach uses Gaussian process regression, a probabilistic machine learning technique, to create a computationally efficient model for mapping these parameters to the resulting SNR after applying DBP. We then use simplicial homology global optimization to find the parameter values that yield maximum SNR for the Gaussian process model within a set of a priori bounds. This approach optimizes the parameters in terms of the DBP gain at the receiver. We demonstrate the effectiveness of our method through simulation and experimental testing, achieving optimal estimates of the dispersion, nonlinearity, and attenuation parameters. Our approach also highlights the limitations of traditional one-at-a-time grid search methods and emphasizes the interpretability of the technique. This methodology has broad applications in engineering and can be used to optimize performance in various systems beyond optical networks.
{"title":"Optical network physical layer parameter optimization for digital backpropagation using Gaussian processes","authors":"Josh W. Nevin, E. Sillekens, Ronit Sohanpal, L. Galdino, Sam Nallaperuma, P. Bayvel, S. Savory","doi":"10.1017/dce.2023.15","DOIUrl":"https://doi.org/10.1017/dce.2023.15","url":null,"abstract":"Abstract We present a novel methodology for optimizing fiber optic network performance by determining the ideal values for attenuation, nonlinearity, and dispersion parameters in terms of achieved signal-to-noise ratio (SNR) gain from digital backpropagation (DBP). Our approach uses Gaussian process regression, a probabilistic machine learning technique, to create a computationally efficient model for mapping these parameters to the resulting SNR after applying DBP. We then use simplicial homology global optimization to find the parameter values that yield maximum SNR for the Gaussian process model within a set of a priori bounds. This approach optimizes the parameters in terms of the DBP gain at the receiver. We demonstrate the effectiveness of our method through simulation and experimental testing, achieving optimal estimates of the dispersion, nonlinearity, and attenuation parameters. Our approach also highlights the limitations of traditional one-at-a-time grid search methods and emphasizes the interpretability of the technique. This methodology has broad applications in engineering and can be used to optimize performance in various systems beyond optical networks.","PeriodicalId":34169,"journal":{"name":"DataCentric Engineering","volume":" ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2023-08-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"44814367","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Abstract While finite element (FE) modeling is widely used for ultimate strength assessments of structural systems, incorporating complex distortions and imperfections into FE models remains a challenge. Conventional methods typically rely on assumptions about the periodicity of distortions through spectral or modal methods. However, these approaches are not viable in the many realistic scenarios where those assumptions are invalid. Research efforts have consistently demonstrated the ability of point cloud data, generated through laser scanning or photogrammetry-based methods, to accurately capture structural deformations at the millimeter scale. This enables numerical models to be updated to capture the exact structural configuration and initial imperfections without unrealistic assumptions. This research article investigates the use of point cloud data for updating the initial distortions in an FE model of a stiffened ship deck panel for the purpose of ultimate strength estimation. The presented approach has the additional benefit of being able to explicitly account for measurement uncertainty in the analysis. Calculations using the updated FE models are compared against ground truth test data as well as against FE models updated using standard spectral methods. The results demonstrate strength estimation comparable to existing approaches, with the additional advantages of uncertainty quantification and applicability to a wider range of application scenarios.
{"title":"Finite element model updating with quantified uncertainties using point cloud data","authors":"W. Graves, K. Nahshon, K. Aminfar, D. Lattanzi","doi":"10.1017/dce.2023.7","DOIUrl":"https://doi.org/10.1017/dce.2023.7","url":null,"abstract":"Abstract While finite element (FE) modeling is widely used for ultimate strength assessments of structural systems, incorporating complex distortions and imperfections into FE models remains a challenge. Conventional methods typically rely on assumptions about the periodicity of distortions through spectral or modal methods. However, these approaches are not viable under the many realistic scenarios where these assumptions are invalid. Research efforts have consistently demonstrated the ability of point cloud data, generated through laser scanning or photogrammetry-based methods, to accurately capture structural deformations at the millimeter scale. This enables the updating of numerical models to capture the exact structural configuration and initial imperfections without the need for unrealistic assumptions. This research article investigates the use of point cloud data for updating the initial distortions in a FE model of a stiffened ship deck panel, for the purposes of ultimate strength estimation. The presented approach has the additional benefit of being able to explicitly account for measurement uncertainty in the analysis. Calculations using the updated FE models are compared against ground truth test data as well as FE models updated using standard spectral methods. The results demonstrate strength estimation that is comparable to existing approaches, with the additional advantages of uncertainty quantification and applicability to a wider range of application scenarios.","PeriodicalId":34169,"journal":{"name":"DataCentric Engineering","volume":" ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2023-06-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"43588518","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Abstract Maritime engineering relies on model forecasts for many different processes, including meteorological and oceanographic forcings, structural responses, and energy demands. Understanding how such forecasting models perform, and how they should be evaluated, is crucial for instilling reliability in maritime operations. Evaluation metrics that assess the point accuracy of the forecast (such as root-mean-squared error) are commonplace, but with the increased uptake of probabilistic forecasting methods such metrics may not consider the full forecast distribution. The statistical theory of proper scoring rules provides a framework in which to score and compare competing probabilistic forecasts, but it is seldom appealed to in applications. This translational paper presents the underlying theory and principles of proper scoring rules, develops a simple panel of rules that may be used to robustly evaluate the performance of competing probabilistic forecasts, and demonstrates this with an application to forecasting surface winds at an asset on Australia’s North–West Shelf. Where appropriate, we relate the statistical theory to common requirements of the maritime engineering industry. The case study comes from a body of work undertaken to quantify the value of an operational forecasting product and is a clear demonstration of the downstream impacts that statistical and data science methods can have in maritime engineering operations.
{"title":"Evaluating probabilistic forecasts for maritime engineering operations","authors":"L. Astfalck, Michael Bertolacci, E. Cripps","doi":"10.1017/dce.2023.11","DOIUrl":"https://doi.org/10.1017/dce.2023.11","url":null,"abstract":"Abstract Maritime engineering relies on model forecasts for many different processes, including meteorological and oceanographic forcings, structural responses, and energy demands. Understanding the performance and evaluation of such forecasting models is crucial in instilling reliability in maritime operations. Evaluation metrics that assess the point accuracy of the forecast (such as root-mean-squared error) are commonplace, but with the increased uptake of probabilistic forecasting methods such evaluation metrics may not consider the full forecasting distribution. The statistical theory of proper scoring rules provides a framework in which to score and compare competing probabilistic forecasts, but it is seldom appealed to in applications. This translational paper presents the underlying theory and principles of proper scoring rules, develops a simple panel of rules that may be used to robustly evaluate the performance of competing probabilistic forecasts, and demonstrates this with an application to forecasting surface winds at an asset on Australia’s North–West Shelf. Where appropriate, we relate the statistical theory to common requirements by maritime engineering industry. The case study is from a body of work that was undertaken to quantify the value resulting from an operational forecasting product and is a clear demonstration of the downstream impacts that statistical and data science methods can have in maritime engineering operations.","PeriodicalId":34169,"journal":{"name":"DataCentric Engineering","volume":"55 3-4","pages":""},"PeriodicalIF":0.0,"publicationDate":"2023-06-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"41297564","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Abstract Reliable short-term load forecasting is vital for the planning and operation of electric power systems. Short-term load forecasting is a critical component used in purchasing and generating electric power, dispatching, and load switching, which is essential for balancing supply and demand and mitigating the risk of power shortages. This is becoming even more critical given the transition to carbon-neutral technologies in the energy sector. Specifically, since renewable sources are inherently uncertain, a distributed energy system with renewable generation units is more heavily dependent on accurate load forecasts for demand-response management than traditional energy sectors. Despite extensive literature on forecasting electricity demand, most studies focus on predicting the total demand solely based on the previous-step observations of aggregate demand. With advances in smart-metering technology and the availability of high-resolution consumption data, harnessing fine-resolution smart-meter data in load forecasting has attracted increasing attention. Studies using smart-meter data mainly involve a “bottom-up” approach that develops separate forecast models at sub-aggregate levels and aggregates the forecasts to estimate the total demand. While this approach is conducive to incorporating fine-resolution data for load forecasting, it has several shortcomings that can result in sub-optimal forecasts. However, these shortcomings are hardly acknowledged in the load forecasting literature. This work demonstrates how limitations imposed by such a bottom-up load forecasting approach can lead to misleading results, which could hamper efficient load management within a carbon-neutral grid.
{"title":"Bottom-up forecasting: Applications and limitations in load forecasting using smart-meter data","authors":"Harsh Anand, R. Nateghi, Negin Alemazkoor","doi":"10.1017/dce.2023.10","DOIUrl":"https://doi.org/10.1017/dce.2023.10","url":null,"abstract":"Abstract Reliable short-term load forecasting is vital for the planning and operation of electric power systems. Short-term load forecasting is a critical component used in purchasing and generating electric power, dispatching, and load switching, which is essential for balancing supply and demand and mitigating the risk of power shortages. This is becoming even more critical given the transition to carbon-neutral technologies in the energy sector. Specifically, since renewable sources are inherently uncertain, a distributed energy system with renewable generation units is more heavily dependent on accurate load forecasts for demand-response management than traditional energy sectors. Despite extensive literature on forecasting electricity demand, most studies focus on predicting the total demand solely based on the previous-step observations of aggregate demand. With advances in smart-metering technology and the availability of high-resolution consumption data, harnessing fine-resolution smart-meter data in load forecasting has attracted increasing attention. Studies using smart-meter data mainly involve a “bottom-up” approach that develops separate forecast models at sub-aggregate levels and aggregates the forecasts to estimate the total demand. While this approach is conducive to incorporating fine-resolution data for load forecasting, it has several shortcomings that can result in sub-optimal forecasts. However, these shortcomings are hardly acknowledged in the load forecasting literature. This work demonstrates how limitations imposed by such a bottom-up load forecasting approach can lead to misleading results, which could hamper efficient load management within a carbon-neutral grid.","PeriodicalId":34169,"journal":{"name":"DataCentric Engineering","volume":" ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2023-06-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"46158744","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Abstract The exploitation of hydrocarbon reservoirs can lead to contamination of soils and shallow water resources, as well as to greenhouse gas emissions. Fluids such as methane or CO2 may in some cases migrate toward the groundwater zone and the atmosphere through and along imperfectly sealed hydrocarbon wells. Field tests in hydrocarbon-producing regions are routinely conducted to detect serious leakage and prevent environmental pollution. The challenge is that testing is costly, time-consuming, and sometimes labor-intensive. In this study, machine learning approaches were applied to predict serious leakage, with uncertainty quantification, for wells that have not been field tested in Alberta, Canada. An improved imputation technique was developed by Cholesky factorization of the covariance matrix between features, where missing data are imputed via conditioning on the available values. The uncertainty in imputed values was quantified and incorporated into the final prediction to improve decision-making. Next, a wide range of predictive algorithms and various performance metrics were considered to achieve the most reliable classifier. However, a highly skewed distribution of field tests toward the negative class (nonserious leakage) forces predictive models to unrealistically underestimate the minority class (serious leakage). To address this issue, a combination of oversampling, undersampling, and ensemble learning was applied. By investigating all the models on never-before-seen data, an optimum classifier with minimal false negative predictions was determined. The developed methodology can be applied to identify the wells with the highest likelihood of serious fluid leakage within producing fields. This information is of key importance for optimizing field test operations to achieve economic and environmental benefits.
{"title":"Machine learning approaches for the prediction of serious fluid leakage from hydrocarbon wells","authors":"Mehdi Rezvandehy, B. Mayer","doi":"10.1017/dce.2023.9","DOIUrl":"https://doi.org/10.1017/dce.2023.9","url":null,"abstract":"Abstract The exploitation of hydrocarbon reservoirs may potentially lead to contamination of soils, shallow water resources, and greenhouse gas emissions. Fluids such as methane or CO2 may in some cases migrate toward the groundwater zone and atmosphere through and along imperfectly sealed hydrocarbon wells. Field tests in hydrocarbon-producing regions are routinely conducted for detecting serious leakage to prevent environmental pollution. The challenge is that testing is costly, time-consuming, and sometimes labor-intensive. In this study, machine learning approaches were applied to predict serious leakage with uncertainty quantification for wells that have not been field tested in Alberta, Canada. An improved imputation technique was developed by Cholesky factorization of the covariance matrix between features, where missing data are imputed via conditioning of available values. The uncertainty in imputed values was quantified and incorporated into the final prediction to improve decision-making. Next, a wide range of predictive algorithms and various performance metrics were considered to achieve the most reliable classifier. However, a highly skewed distribution of field tests toward the negative class (nonserious leakage) forces predictive models to unrealistically underestimate the minority class (serious leakage). To address this issue, a combination of oversampling, undersampling, and ensemble learning was applied. By investigating all the models on never-before-seen data, an optimum classifier with minimal false negative prediction was determined. The developed methodology can be applied to identify the wells with the highest likelihood for serious fluid leakage within producing fields. This information is of key importance for optimizing field test operations to achieve economic and environmental benefits.","PeriodicalId":34169,"journal":{"name":"DataCentric Engineering","volume":" ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2023-05-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"44624952","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Jessica C. Forsdyke, Bahdan Zviazhynski, J. Lees, G. Conduit
Abstract Development of robust concrete mixes with a lower environmental impact is challenging due to natural variability in constituent materials and a multitude of possible combinations of mix proportions. Making reliable property predictions with machine learning can facilitate performance-based specification of concrete, reducing material inefficiencies and improving the sustainability of concrete construction. In this work, we develop a machine learning algorithm that can utilize intermediate target variables and their associated noise to predict the final target variable. We apply the methodology to specify a concrete mix that has high resistance to carbonation, and another concrete mix that has low environmental impact. Both mixes also fulfill targets on the strength, density, and cost. The specified mixes are experimentally validated against their predictions. Our generic methodology enables the exploitation of noise in machine learning, which has a broad range of applications in structural engineering and beyond.
{"title":"Probabilistic selection and design of concrete using machine learning","authors":"Jessica C. Forsdyke, Bahdan Zviazhynski, J. Lees, G. Conduit","doi":"10.1017/dce.2023.5","DOIUrl":"https://doi.org/10.1017/dce.2023.5","url":null,"abstract":"Abstract Development of robust concrete mixes with a lower environmental impact is challenging due to natural variability in constituent materials and a multitude of possible combinations of mix proportions. Making reliable property predictions with machine learning can facilitate performance-based specification of concrete, reducing material inefficiencies and improving the sustainability of concrete construction. In this work, we develop a machine learning algorithm that can utilize intermediate target variables and their associated noise to predict the final target variable. We apply the methodology to specify a concrete mix that has high resistance to carbonation, and another concrete mix that has low environmental impact. Both mixes also fulfill targets on the strength, density, and cost. The specified mixes are experimentally validated against their predictions. Our generic methodology enables the exploitation of noise in machine learning, which has a broad range of applications in structural engineering and beyond.","PeriodicalId":34169,"journal":{"name":"DataCentric Engineering","volume":" ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2023-04-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"46615250","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Abstract The International Maritime Organization, together with a group of European countries (the Paris MoU), introduced port state control (PSC) inspections of vessels in national ports in 1982 to evaluate their compliance with safety and security regulations. This study discusses how PSC data share common characteristics with fundamental Big Data theories and how, by interpreting them as Big Data, their governance and transparency can be treated as a Big Data challenge in order to gain value from their use. From a Big Data perspective, PSC data should exhibit volume, velocity, variety, value, and complexity in order to best support officers both ashore and on board in keeping the vessel in the best possible condition for sailing. To this end, this paper applies Big Data theories broadly used within academic and business environments to the characteristics of the datasets and to how value can be accessed from Big Data and Analytics. The research concludes that PSC data provide valid information to the shipping industry. However, the inability of PSC data to present the complete picture of PSC regimes and ports challenges the maritime community’s efforts toward a safer and more sustainable industry.
{"title":"Leveraging Big Data in port state control: An analysis of port state control data and its potential for governance and transparency in the shipping industry","authors":"D. Ampatzidis","doi":"10.1017/dce.2023.6","DOIUrl":"https://doi.org/10.1017/dce.2023.6","url":null,"abstract":"Abstract The International Maritime Organization along with couple European countries (Paris MoU) has introduced in 1982 the port state control (PSC) inspections of vessels in national ports to evaluate their compliance with safety and security regulations. This study discusses how the PSC data share common characteristics with Big Data fundamental theories, and by interpreting them as Big Data, we could enjoy their governance and transparency as a Big Data challenge to gain value from their use. Thus, from the scope of Big Data, PSC should exhibit volume, velocity, variety, value, and complexity to support in the best possible way both officers ashore and on board to maintain the vessel in the best possible conditions for sailing. For the above purpose, this paper employs Big Data theories broadly used within the academic and business environment on datasets characteristics and how to access the value from Big Data and Analytics. The research concludes that PSC data provide valid information to the shipping industry. However, the lack of PSC data ability to present the complete picture of PSC regimes and ports challenges the maritime community’s attempts for a safer and more sustainable industry.","PeriodicalId":34169,"journal":{"name":"DataCentric Engineering","volume":" ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2023-04-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"47190164","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Abstract As the Reynolds number increases, the large-eddy simulation (LES) of complex flows becomes increasingly intractable because near-wall turbulent structures become increasingly small. Wall modeling reduces the computational requirements of LES by enabling the use of coarser cells at the walls. This paper presents a machine-learning methodology to develop data-driven wall-shear-stress models that can directly operate, a posteriori, on the unstructured grid of the simulation. The model architecture is based on graph neural networks. The model is trained on a database which includes fully developed boundary layers, adverse pressure gradients, separated boundary layers, and laminar–turbulent transition. The relevance of the trained model is verified a posteriori for the simulation of a channel flow, a backward-facing step and a linear blade cascade.
{"title":"Modeling the wall shear stress in large-eddy simulation using graph neural networks","authors":"D. Dupuy, N. Odier, C. Lapeyre, D. Papadogiannis","doi":"10.1017/dce.2023.2","DOIUrl":"https://doi.org/10.1017/dce.2023.2","url":null,"abstract":"Abstract As the Reynolds number increases, the large-eddy simulation (LES) of complex flows becomes increasingly intractable because near-wall turbulent structures become increasingly small. Wall modeling reduces the computational requirements of LES by enabling the use of coarser cells at the walls. This paper presents a machine-learning methodology to develop data-driven wall-shear-stress models that can directly operate, a posteriori, on the unstructured grid of the simulation. The model architecture is based on graph neural networks. The model is trained on a database which includes fully developed boundary layers, adverse pressure gradients, separated boundary layers, and laminar–turbulent transition. The relevance of the trained model is verified a posteriori for the simulation of a channel flow, a backward-facing step and a linear blade cascade.","PeriodicalId":34169,"journal":{"name":"DataCentric Engineering","volume":" ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2023-03-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"46992515","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Abstract An approach is presented for the identification of discontinuous and nonsmooth nonlinear forces, such as those generated by frictional contacts, in mechanical systems that can be approximated by a single-degree-of-freedom model. To handle the sharp variations and multiple motion regimes introduced by these nonlinearities in the dynamic response, the partially known physics-based model and noisy measurements of the system’s response to a known input force are combined within a switching Gaussian process latent force model (GPLFM). In this grey-box framework, multiple Gaussian processes are used to model the unknown nonlinear force across different motion regimes, and a resetting model enables the generation of discontinuities. The states of the system, the nonlinear force, and the regime transitions are inferred using filtering and smoothing techniques for switching linear dynamical systems. The proposed switching GPLFM is applied to a simulated dry friction oscillator and to an experimental setup consisting of a single-storey frame with a brass-to-steel contact. Excellent results are obtained for the identified nonlinear and discontinuous friction force under varying (i) normal load amplitudes in the contact, (ii) measurement noise levels, and (iii) numbers of samples in the datasets. Moreover, the identified states, friction force, and sequence of motion regimes are used to evaluate (1) uncertain system parameters, (2) the friction force–velocity relationship, and (3) the static friction force. The correct identification of the discontinuous nonlinear force and the quantification of any remaining uncertainty in its prediction enable the implementation of an accurate forward model able to predict the system’s response to different input forces.
{"title":"A switching Gaussian process latent force model for the identification of mechanical systems with a discontinuous nonlinearity","authors":"L. Marino, A. Cicirello","doi":"10.1017/dce.2023.12","DOIUrl":"https://doi.org/10.1017/dce.2023.12","url":null,"abstract":"Abstract An approach for the identification of discontinuous and nonsmooth nonlinear forces, as those generated by frictional contacts, in mechanical systems that can be approximated by a single-degree-of-freedom model is presented. To handle the sharp variations and multiple motion regimes introduced by these nonlinearities in the dynamic response, the partially known physics-based model and noisy measurements of the system’s response to a known input force are combined within a switching Gaussian process latent force model (GPLFM). In this grey-box framework, multiple Gaussian processes are used to model the unknown nonlinear force across different motion regimes and a resetting model enables the generation of discontinuities. The states of the system, nonlinear force, and regime transitions are inferred by using filtering and smoothing techniques for switching linear dynamical systems. The proposed switching GPLFM is applied to a simulated dry friction oscillator and an experimental setup consisting of a single-storey frame with a brass-to-steel contact. Excellent results are obtained in terms of the identified nonlinear and discontinuous friction force for varying: (i) normal load amplitudes in the contact; (ii) measurement noise levels, and (iii) number of samples in the datasets. Moreover, the identified states, friction force, and sequence of motion regimes are used for evaluating: (1) uncertain system parameters; (2) the friction force–velocity relationship, and (3) the static friction force. The correct identification of the discontinuous nonlinear force and the quantification of any remaining uncertainty in its prediction enable the implementation of an accurate forward model able to predict the system’s response to different input forces.","PeriodicalId":34169,"journal":{"name":"DataCentric Engineering","volume":" ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2023-03-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"47282521","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}