Xiaolong He, Qizhi He, Jiun-Shyan Chen, U. Sinha, S. Sinha
Abstract As the characterization and modeling of complex materials by phenomenological models remain challenging, data-driven computing that performs physical simulations directly from material data has attracted considerable attention. Data-driven computing is a general computational mechanics framework that consists of a physical solver and a material solver, based on which data-driven solutions are obtained through minimization procedures. This work develops a new material solver built upon the local convexity-preserving reconstruction scheme of He and Chen (2020, "A physics-constrained data-driven approach based on locally convex reconstruction for noisy database," Computer Methods in Applied Mechanics and Engineering 363, 112791) to model anisotropic nonlinear elastic solids. In this approach, a two-level local data search algorithm for material anisotropy is introduced into the material solver in online data-driven computing. A material anisotropic state characterizing the underlying material orientation is used for the manifold learning projection in the material solver. The performance of the proposed data-driven framework with noiseless and noisy material data is validated by solving two benchmark problems with synthetic material data. The data-driven solutions are compared with the constitutive model-based reference solutions to demonstrate the effectiveness of the proposed methods.
{"title":"Physics-constrained local convexity data-driven modeling of anisotropic nonlinear elastic solids","authors":"Xiaolong He, Qizhi He, Jiun-Shyan Chen, U. Sinha, S. Sinha","doi":"10.1017/dce.2020.20","DOIUrl":"https://doi.org/10.1017/dce.2020.20","url":null,"abstract":"Abstract As characterization and modeling of complex materials by phenomenological models remains challenging, data-driven computing that performs physical simulations directly from material data has attracted considerable attention. Data-driven computing is a general computational mechanics framework that consists of a physical solver and a material solver, based on which data-driven solutions are obtained through minimization procedures. This work develops a new material solver built upon the local convexity-preserving reconstruction scheme by He and Chen (2020) A physics-constrained data-driven approach based on locally convex reconstruction for noisy database. Computer Methods in Applied Mechanics and Engineering 363, 112791 to model anisotropic nonlinear elastic solids. In this approach, a two-level local data search algorithm for material anisotropy is introduced into the material solver in online data-driven computing. A material anisotropic state characterizing the underlying material orientation is used for the manifold learning projection in the material solver. The performance of the proposed data-driven framework with noiseless and noisy material data is validated by solving two benchmark problems with synthetic material data. The data-driven solutions are compared with the constitutive model-based reference solutions to demonstrate the effectiveness of the proposed methods.","PeriodicalId":34169,"journal":{"name":"DataCentric Engineering","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2020-12-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1017/dce.2020.20","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"46481855","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Abstract Anomaly detection in asset condition data is critical for reliable industrial asset operations, but statistical anomaly classifiers require a certain amount of normal-operations training data before acceptable accuracy can be achieved. The necessary training data are often not available in the early periods of asset operation. This problem is addressed in this paper using a hierarchical model for the asset fleet that systematically identifies similar assets and enables collaborative learning within the clusters of similar assets. The general behavior of the similar assets is represented using higher-level models, from which the parameters describing individual asset operations are sampled. Hierarchical models enable the individuals from a population, comprising statistically coherent subpopulations, to learn collaboratively from one another. Results obtained with the hierarchical model show a marked improvement in anomaly detection for assets with little data, compared to independent modeling or a single model common to the entire fleet.
{"title":"Anomaly detection in a fleet of industrial assets with hierarchical statistical modeling","authors":"M. Dhada, M. Girolami, A. Parlikad","doi":"10.1017/dce.2020.19","DOIUrl":"https://doi.org/10.1017/dce.2020.19","url":null,"abstract":"Abstract Anomaly detection in asset condition data is critical for reliable industrial asset operations. But statistical anomaly classifiers require certain amount of normal operations training data before acceptable accuracy can be achieved. The necessary training data are often not available in the early periods of assets operations. This problem is addressed in this paper using a hierarchical model for the asset fleet that systematically identifies similar assets, and enables collaborative learning within the clusters of similar assets. The general behavior of the similar assets are represented using higher level models, from which the parameters are sampled describing the individual asset operations. Hierarchical models enable the individuals from a population, comprising of statistically coherent subpopulations, to collaboratively learn from one another. Results obtained with the hierarchical model show a marked improvement in anomaly detection for assets having low amount of data, compared to independent modeling or having a model common to the entire fleet.","PeriodicalId":34169,"journal":{"name":"DataCentric Engineering","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2020-12-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1017/dce.2020.19","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"42491903","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
M. Jans-Singh, K. Leeming, R. Choudhary, M. Girolami
Abstract This paper presents the development process of a digital twin of a unique hydroponic underground farm in London, Growing Underground (GU). Growing 12x more per unit area than traditional greenhouse farming in the UK, the farm also consumes 4x more energy per unit area. Key to the ongoing operational success of this farm and similar enterprises is finding ways to minimize the energy use while maximizing crop growth by maintaining optimal growing conditions. As such, it belongs to the class of Controlled Environment Agriculture, where indoor environments are carefully controlled to maximize crop growth by using artificial lighting and smart heating, ventilation, and air conditioning systems. We tracked changing environmental conditions and crop growth across 89 different variables, through a wireless sensor network and unstructured manual records, and combined all the data into a database. We show how the digital twin can provide enhanced outputs for a bespoke site like GU, by creating inferred data fields, and show the limitations of data collection in a commercial environment. For example, we find that lighting is the dominant environmental factor for temperature and thus crop growth in this farm, and that the effects of external temperature and ventilation are confounded. We combine information learned from historical data interpretation to create a bespoke temperature forecasting model (root mean squared error < 1.3°C), using a dynamic linear model with a data-centric lighting component. Finally, we present how the forecasting model can be integrated into the digital twin to provide feedback to the farmers for decision-making assistance.
{"title":"Digital twin of an urban-integrated hydroponic farm","authors":"M. Jans-Singh, K. Leeming, R. Choudhary, M. Girolami","doi":"10.1017/dce.2020.21","DOIUrl":"https://doi.org/10.1017/dce.2020.21","url":null,"abstract":"Abstract This paper presents the development process of a digital twin of a unique hydroponic underground farm in London, Growing Underground (GU). Growing 12x more per unit area than traditional greenhouse farming in the UK, the farm also consumes 4x more energy per unit area. Key to the ongoing operational success of this farm and similar enterprises is finding ways to minimize the energy use while maximizing crop growth by maintaining optimal growing conditions. As such, it belongs to the class of Controlled Environment Agriculture, where indoor environments are carefully controlled to maximize crop growth by using artificial lighting and smart heating, ventilation, and air conditioning systems. We tracked changing environmental conditions and crop growth across 89 different variables, through a wireless sensor network and unstructured manual records, and combined all the data into a database. We show how the digital twin can provide enhanced outputs for a bespoke site like GU, by creating inferred data fields, and show the limitations of data collection in a commercial environment. For example, we find that lighting is the dominant environmental factor for temperature and thus crop growth in this farm, and that the effects of external temperature and ventilation are confounded. We combine information learned from historical data interpretation to create a bespoke temperature forecasting model (root mean squared error < 1.3°C), using a dynamic linear model with a data-centric lighting component. Finally, we present how the forecasting model can be integrated into the digital twin to provide feedback to the farmers for decision-making assistance.","PeriodicalId":34169,"journal":{"name":"DataCentric Engineering","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2020-12-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1017/dce.2020.21","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"46856540","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Meshkat Botshekan, J. Roxon, Athikom Wanichkul, Theemathas Chirananthavat, J. Chamoun, Malik Ziq, Bader Anini, Naseem A. Daher, Abdalkarim Awad, Wasel T. Ghanem, M. Tootkaboni, A. Louhghalam, F. Ulm
Abstract We propose, calibrate, and validate a crowdsourced approach for estimating power spectral density (PSD) of road roughness based on an inverse analysis of vertical acceleration measured by a smartphone mounted in an unknown position in a vehicle. Built upon random vibration analysis of a half-car mechanistic model of roughness-induced pavement–vehicle interaction, the inverse analysis employs an L2 norm regularization to estimate ride quality metrics, such as the widely used International Roughness Index, from the acceleration PSD. Evoking the fluctuation–dissipation theorem of statistical physics, the inverse framework estimates the half-car dynamic vehicle properties and related excess fuel consumption. The method is validated against (a) laser-measured road roughness data for both inner city and highway road conditions and (b) road roughness data for the state of California. We also show that the phone position in the vehicle only marginally affects road roughness predictions, an important condition for crowdsourced capabilities of the proposed approach.
{"title":"Roughness-induced vehicle energy dissipation from crowdsourced smartphone measurements through random vibration theory","authors":"Meshkat Botshekan, J. Roxon, Athikom Wanichkul, Theemathas Chirananthavat, J. Chamoun, Malik Ziq, Bader Anini, Naseem A. Daher, Abdalkarim Awad, Wasel T. Ghanem, M. Tootkaboni, A. Louhghalam, F. Ulm","doi":"10.1017/dce.2020.17","DOIUrl":"https://doi.org/10.1017/dce.2020.17","url":null,"abstract":"Abstract We propose, calibrate, and validate a crowdsourced approach for estimating power spectral density (PSD) of road roughness based on an inverse analysis of vertical acceleration measured by a smartphone mounted in an unknown position in a vehicle. Built upon random vibration analysis of a half-car mechanistic model of roughness-induced pavement–vehicle interaction, the inverse analysis employs an L2 norm regularization to estimate ride quality metrics, such as the widely used International Roughness Index, from the acceleration PSD. Evoking the fluctuation–dissipation theorem of statistical physics, the inverse framework estimates the half-car dynamic vehicle properties and related excess fuel consumption. The method is validated against (a) laser-measured road roughness data for both inner city and highway road conditions and (b) road roughness data for the state of California. We also show that the phone position in the vehicle only marginally affects road roughness predictions, an important condition for crowdsourced capabilities of the proposed approach.","PeriodicalId":34169,"journal":{"name":"DataCentric Engineering","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2020-12-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1017/dce.2020.17","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"44939170","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Abstract For research in the fields of engineering asset management (EAM) and system health, relevant data reside in the information systems of the asset owners, typically industrial corporations or government bodies. For academics, accessing EAM data sets for research purposes can be a difficult and time-consuming task. To facilitate a more consistent approach toward releasing asset-related data, we have developed a data risk assessment tool (DRAT). This tool evaluates, and suggests controls to manage, the risks associated with the release of EAM datasets to academic entities for research purposes. Factors considered in developing the tool include where accountability for approval sits in organizations, what affects an individual manager's willingness to approve release, and how trust between universities and industry can be established and damaged. This paper describes the design of the DRAT tool and demonstrates its use on case studies provided by EAM owners for past research projects. The DRAT tool is currently being used to manage the data release process in a government–industry–university research partnership.
{"title":"DRAT: Data risk assessment tool for university–industry collaborations","authors":"J. Sikorska, S. Bradley, M. Hodkiewicz, R. Fraser","doi":"10.1017/dce.2020.13","DOIUrl":"https://doi.org/10.1017/dce.2020.13","url":null,"abstract":"Abstract For research in the fields of engineering asset management (EAM) and system health, relevant data resides in the information systems of the asset owners, typically industrial corporations or government bodies. For academics to access EAM data sets for research purposes can be a difficult and time-consuming task. To facilitate a more consistent approach toward releasing asset-related data, we have developed a data risk assessment tool (DRAT). This tool evaluates and suggests controls to manage, risks associated with the release of EAM datasets to academic entities for research purposes. Factors considered in developing the tool include issues such as where accountability for approval sits in organizations, what affects an individual manager’s willingness to approve release, and how trust between universities and industry can be established and damaged. This paper describes the design of the DRAT tool and demonstrates its use on case studies provided by EAM owners for past research projects. The DRAT tool is currently being used to manage the data release process in a government-industry-university research partnership.","PeriodicalId":34169,"journal":{"name":"DataCentric Engineering","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2020-12-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1017/dce.2020.13","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"47069399","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Abstract In this paper, we apply flexible data-driven analysis methods to large-scale mass transit data to identify areas for improvement in the engineering and operation of urban rail systems. Specifically, we use data from automated fare collection (AFC) and automated vehicle location (AVL) systems to obtain a more precise characterisation of the drivers of journey time variance on the London Underground, and thus an improved understanding of delay. Total journey times are decomposed via a probabilistic assignment algorithm, and semiparametric regression is undertaken to disentangle the effects of passenger-specific travel characteristics from network-related factors. For total journey times, we find that network characteristics, primarily train speeds and headways, account for the majority of journey time variance. However, within the access and egress time components, which are typically twice as onerous, passenger-level heterogeneity is more influential. On average, we find that intra-passenger heterogeneity represents 6% and 19% of the variance in access and egress times, respectively, and that inter-passenger effects have a similar or greater degree of influence than static network characteristics. The analysis shows that while network-specific characteristics are the primary drivers of journey time variance in absolute terms, a nontrivial proportion of passenger-perceived variance is influenced by passenger-specific characteristics. The findings have potential applications related to improving the understanding of passenger movements within stations; for example, the analysis can be used to assess the relative way-finding complexity of stations, which can in turn guide transit operators in targeting potential interventions.
{"title":"Quantifying the effects of passenger-level heterogeneity on transit journey times","authors":"Ramandeep Singh, D. Graham, R. Anderson","doi":"10.1017/dce.2020.15","DOIUrl":"https://doi.org/10.1017/dce.2020.15","url":null,"abstract":"Abstract In this paper, we apply flexible data-driven analysis methods on large-scale mass transit data to identify areas for improvement in the engineering and operation of urban rail systems. Specifically, we use data from automated fare collection (AFC) and automated vehicle location (AVL) systems to obtain a more precise characterisation of the drivers of journey time variance on the London Underground, and thus an improved understanding of delay. Total journey times are decomposed via a probabilistic assignment algorithm, and semiparametric regression is undertaken to disentangle the effects of passenger-specific travel characteristics from network-related factors. For total journey times, we find that network characteristics, primarily train speeds and headways, represent the majority of journey time variance. However, within the typically twice as onerous access and egress time components, passenger-level heterogeneity is more influential. On average, we find that intra-passenger heterogeneity represents 6% and 19% of variance in access and egress times, respectively, and that inter-passenger effects have a similar or greater degree of influence than static network characteristics. The analysis shows that while network-specific characteristics are the primary drivers of journey time variance in absolute terms, a nontrivial proportion of passenger-perceived variance would be influenced by passenger-specific characteristics. The findings have potential applications related to improving the understanding of passenger movements within stations, for example, the analysis can be used to assess the relative way-finding complexity of stations, which can in turn guide transit operators in the targeting of potential interventions.","PeriodicalId":34169,"journal":{"name":"DataCentric Engineering","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2020-12-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1017/dce.2020.15","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"43452878","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
P. Seshadri, A. Duncan, G. Thorne, G. Parks, Raul Vazquez Diaz, M. Girolami
Aeroengine performance is determined by temperature and pressure profiles along various axial stations within an engine. Given limited sensor measurements, we require a statistically principled approach to inferring these profiles. In this paper we detail a Bayesian methodology for interpolating the spatial temperature or pressure profile at axial stations within an aeroengine. The profile at any given axial station is represented as a spatial Gaussian random field on an annulus, with circumferential variations modelled using a Fourier basis and radial variations modelled with a squared exponential kernel. This Gaussian random field is extended to ingest data from multiple axial measurement planes, with the aim of transferring information across the planes. To facilitate this type of transfer learning, a novel planar covariance kernel is proposed. In the scenario where the frequencies comprising the temperature field are unknown, we utilise a sparsity-promoting prior on the frequencies to encourage sparse representations. This easily extends to cases with multiple engine planes whilst accommodating frequency variations between the planes. The main quantity of interest, the spatial area average, is readily obtained in closed form. We term this the Bayesian area average and demonstrate how this metric offers far more representative averages than a sector area average, a widely used area-averaging approach. Furthermore, the Bayesian area average naturally decomposes the posterior uncertainty into terms characterising insufficient sampling and sensor measurement error, respectively. This too provides a significant improvement over prior standard-deviation-based uncertainty breakdowns.
{"title":"Bayesian assessments of aeroengine performance with transfer learning","authors":"P. Seshadri, A. Duncan, G. Thorne, G. Parks, Raul Vazquez Diaz, M. Girolami","doi":"10.1017/dce.2022.29","DOIUrl":"https://doi.org/10.1017/dce.2022.29","url":null,"abstract":"\u0000 Aeroengine performance is determined by temperature and pressure profiles along various axial stations within an engine. Given limited sensor measurements, we require a statistically principled approach for inferring these profiles. In this paper we detail a Bayesian methodology for interpolating the spatial temperature or pressure profile at axial stations within an aeroengine. The profile at any given axial station is represented as a spatial Gaussian random field on an annulus, with circumferential variations modelled using a Fourier basis and radial variations modelled with a squared exponential kernel. This Gaussian random field is extended to ingest data from multiple axial measurement planes, with the aim of transferring information across the planes. To facilitate this type of transfer learning, a novel planar covariance kernel is proposed. In the scenario where frequencies comprising the temperature field are unknown, we utilise a sparsity-promoting prior on the frequencies to encourage sparse representations. This easily extends to cases with multiple engine planes whilst accommodating frequency variations between the planes. The main quantity of interest, the spatial area average is readily obtained in closed form. We term this the Bayesian area average and demonstrate how this metric offers far more representative averages than a sector area average---a widely used area averaging approach. Furthermore, the Bayesian area average naturally decomposes the posterior uncertainty into terms characterising insufficient sampling and sensor measurement error respectively. This too provides a significant improvement over prior standard deviation based uncertainty breakdowns.","PeriodicalId":34169,"journal":{"name":"DataCentric Engineering","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2020-11-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"48754411","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
R. Sacks, I. Brilakis, Ergo Pikas, Haiyan Xie, M. Girolami
Abstract The concept of a “digital twin” as a model for data-driven management and control of physical systems has emerged over the past decade in the domains of manufacturing, production, and operations. In the context of buildings and civil infrastructure, the notion of a digital twin remains ill-defined, with little or no consensus among researchers and practitioners of the ways in which digital twin processes and data-centric technologies can support design and construction. This paper builds on existing concepts of Building Information Modeling (BIM), lean project production systems, automated data acquisition from construction sites and supply chains, and artificial intelligence to formulate a mode of construction that applies digital twin information systems to achieve closed loop control systems. It contributes a set of four core information and control concepts for digital twin construction (DTC), which define the dimensions of the conceptual space for the information used in DTC workflows. Working from the core concepts, we propose a DTC information system workflow—including information stores, information processing functions, and monitoring technologies—according to three concentric control workflow cycles. DTC should be viewed as a comprehensive mode of construction that prioritizes closing the control loops rather than an extension of BIM tools integrated with sensing and monitoring technologies.
{"title":"Construction with digital twin information systems","authors":"R. Sacks, I. Brilakis, Ergo Pikas, Haiyan Xie, M. Girolami","doi":"10.1017/dce.2020.16","DOIUrl":"https://doi.org/10.1017/dce.2020.16","url":null,"abstract":"Abstract The concept of a “digital twin” as a model for data-driven management and control of physical systems has emerged over the past decade in the domains of manufacturing, production, and operations. In the context of buildings and civil infrastructure, the notion of a digital twin remains ill-defined, with little or no consensus among researchers and practitioners of the ways in which digital twin processes and data-centric technologies can support design and construction. This paper builds on existing concepts of Building Information Modeling (BIM), lean project production systems, automated data acquisition from construction sites and supply chains, and artificial intelligence to formulate a mode of construction that applies digital twin information systems to achieve closed loop control systems. It contributes a set of four core information and control concepts for digital twin construction (DTC), which define the dimensions of the conceptual space for the information used in DTC workflows. Working from the core concepts, we propose a DTC information system workflow—including information stores, information processing functions, and monitoring technologies—according to three concentric control workflow cycles. DTC should be viewed as a comprehensive mode of construction that prioritizes closing the control loops rather than an extension of BIM tools integrated with sensing and monitoring technologies.","PeriodicalId":34169,"journal":{"name":"DataCentric Engineering","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2020-11-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1017/dce.2020.16","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"48958259","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
R. Ward, R. Choudhary, A. Gregory, M. Jans-Singh, M. Girolami
Abstract Assimilation of continuously streamed monitored data is an essential component of a digital twin; the assimilated data are used to ensure the digital twin represents the monitored system as accurately as possible. One way this is achieved is by calibration of simulation models, whether data-derived or physics-based, or a combination of both. Traditional manual calibration is not possible in this context; hence, new methods are required for continuous calibration. In this paper, a particle filter methodology for continuous calibration of the physics-based model element of a digital twin is presented and applied to an example of an underground farm. The methodology is applied to a synthetic problem with known calibration parameter values prior to being used in conjunction with monitored data. The proposed methodology is compared against static and sequential Bayesian calibration approaches and compares favourably in terms of determination of the distribution of parameter values and analysis run times, both essential requirements. The methodology is shown to be potentially useful as a means to ensure continuing model fidelity.
{"title":"Continuous calibration of a digital twin: Comparison of particle filter and Bayesian calibration approaches","authors":"R. Ward, R. Choudhary, A. Gregory, M. Jans-Singh, M. Girolami","doi":"10.1017/dce.2021.12","DOIUrl":"https://doi.org/10.1017/dce.2021.12","url":null,"abstract":"Abstract Assimilation of continuously streamed monitored data is an essential component of a digital twin; the assimilated data are used to ensure the digital twin represents the monitored system as accurately as possible. One way this is achieved is by calibration of simulation models, whether data-derived or physics-based, or a combination of both. Traditional manual calibration is not possible in this context; hence, new methods are required for continuous calibration. In this paper, a particle filter methodology for continuous calibration of the physics-based model element of a digital twin is presented and applied to an example of an underground farm. The methodology is applied to a synthetic problem with known calibration parameter values prior to being used in conjunction with monitored data. The proposed methodology is compared against static and sequential Bayesian calibration approaches and compares favourably in terms of determination of the distribution of parameter values and analysis run times, both essential requirements. The methodology is shown to be potentially useful as a means to ensure continuing model fidelity.","PeriodicalId":34169,"journal":{"name":"DataCentric Engineering","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2020-11-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"49603445","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Abstract Traffic congestion across the world has reached chronic levels. Despite many technological disruptions, one of the most fundamental and widely used functions within traffic modeling, the volume–delay function, has seen little in the way of change since it was developed in the 1960s. Traditionally, macroscopic methods have been employed to relate traffic volume to vehicular journey time. The general nature of these functions enables their ease of use and gives them widespread applicability. However, they lack the ability to consider individual road characteristics (i.e., geometry, presence of traffic furniture, road quality, and surrounding environment). This research investigates the feasibility of reconstructing the volume–delay model using two different data sources, namely traffic speeds from Google Maps' Directions Application Programming Interface (API) and traffic volume data from automated traffic counters (ATCs). Google's traffic speed data are crowd-sourced from the smartphone Global Positioning System (GPS) receivers of road users and reflect the real-time, context-specific traffic conditions of a road. The ATCs, in turn, enable the harvesting of vehicle volume data at equally fine temporal resolutions (hourly or less). By combining the two sources for different road types in London, new context-specific volume–delay functions can be generated. The method shows promise in selected locations, where robust functions are generated. In other locations, it highlights the need to better understand other influencing factors, such as the presence of on-road parking or weather events.
{"title":"Context-specific volume–delay curves by combining crowd-sourced traffic data with automated traffic counters: A case study for London","authors":"Gerard Casey, Bingyu Zhao, Krishna Kumar, K. Soga","doi":"10.1017/dce.2020.18","DOIUrl":"https://doi.org/10.1017/dce.2020.18","url":null,"abstract":"Abstract Traffic congestion across the world has reached chronic levels. Despite many technological disruptions, one of the most fundamental and widely used functions within traffic modeling, the volume–delay function has seen little in the way of change since it was developed in the 1960s. Traditionally macroscopic methods have been employed to relate traffic volume to vehicular journey time. The general nature of these functions enables their ease of use and gives widespread applicability. However, they lack the ability to consider individual road characteristics (i.e., geometry, presence of traffic furniture, road quality, and surrounding environment). This research investigates the feasibility to reconstruct the model using two different data sources, namely the traffic speed from Google Maps’ Directions Application Programming Interface (API) and traffic volume data from automated traffic counters (ATC). Google’s traffic speed data are crowd-sourced from the smartphone Global Positioning System (GPS) of road users, able to reflect real-time, context-specific traffic condition of a road. On the other hand, the ATCs enable the harvesting of the vehicle volume data over equally fine temporal resolutions (hourly or less). By combining them for different road types in London, new context-specific volume–delay functions can be generated. This method shows promise in selected locations with the generation of robust functions. In other locations, it highlights the need to better understand other influencing factors, such as the presence of on-road parking or weather events.","PeriodicalId":34169,"journal":{"name":"DataCentric Engineering","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2020-11-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1017/dce.2020.18","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"42685706","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}