A. Batabyal, Sugrim Sagar, Jian Zhang, T. Dube, Xuehui Yang, Jing Zhang
A persistent problem in the selective laser sintering process is maintaining the quality of additively manufactured parts, which can be attributed to various sources of uncertainty. In this work, a two-particle phase-field microstructure model has been analyzed using a Gaussian process-based model. The two input parameters, surface diffusivity and interparticle distance, were treated as the sources of uncertainty. The response quantity of interest (QOI) was the size of the neck region that develops between the two particles. Two cases, with equal and unequal-sized particles, were studied. The neck size increased with increasing surface diffusivity and decreased with increasing interparticle distance, irrespective of particle size. Sensitivity analysis found that interparticle distance has more influence on the variation in neck size than surface diffusivity. Gaussian process regression was used to create a surrogate model of the QOI, and Bayesian optimization was used to find optimal values of the input parameters. For equal-sized particles, optimization using Probability of Improvement gave optimal values of surface diffusivity and interparticle distance of 23.8268 and 40.0001, respectively; Expected Improvement as the acquisition function gave 23.9874 and 40.7428. For unequal-sized particles, the optimal design values from Probability of Improvement were 23.9700 and 33.3005, while those from Expected Improvement were 23.9893 and 33.9627. The optimization results from the two acquisition functions were in good agreement.
Published as: "Gaussian Process-Based Model to Optimize Additively Manufactured Powder Microstructures From Phase Field Modeling," ASCE-ASME Journal of Risk and Uncertainty in Engineering Systems, Part B: Mechanical Engineering, 2021-08-02. DOI: 10.1115/1.4051745.
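The surrogate-plus-acquisition-function workflow described in the abstract can be sketched as follows. This is a minimal illustration, not the authors' code: `neck_size` is a hypothetical stand-in for the phase-field simulation, and the parameter bounds are chosen only to resemble the reported ranges.

```python
import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, ConstantKernel

rng = np.random.default_rng(0)

def neck_size(x):
    # Hypothetical stand-in for the phase-field simulation: neck size as a
    # function of surface diffusivity x[0] and interparticle distance x[1].
    return x[0] - 0.5 * x[1] + 0.1 * np.sin(x[0])

# Train a Gaussian process surrogate on a small design of experiments.
X = rng.uniform([10.0, 30.0], [25.0, 45.0], size=(20, 2))
y = np.array([neck_size(x) for x in X])
gp = GaussianProcessRegressor(kernel=ConstantKernel() * RBF([1.0, 1.0]),
                              normalize_y=True).fit(X, y)

def expected_improvement(Xc, y_best):
    # EI acquisition: expected gain over the best observed value.
    mu, sigma = gp.predict(Xc, return_std=True)
    sigma = np.maximum(sigma, 1e-12)
    z = (mu - y_best) / sigma
    return (mu - y_best) * norm.cdf(z) + sigma * norm.pdf(z)

def probability_of_improvement(Xc, y_best):
    # PI acquisition: probability of exceeding the best observed value.
    mu, sigma = gp.predict(Xc, return_std=True)
    return norm.cdf((mu - y_best) / np.maximum(sigma, 1e-12))

# Score random candidate points and pick the next simulation to run.
Xc = rng.uniform([10.0, 30.0], [25.0, 45.0], size=(1000, 2))
ei = expected_improvement(Xc, y.max())
pi = probability_of_improvement(Xc, y.max())
x_next = Xc[np.argmax(ei)]
```

In a Bayesian optimization loop, `x_next` would be evaluated with the expensive simulation, appended to the training data, and the surrogate refit.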
Probabilistic safety analysis evaluates system reliability and failure probability using statistics and probability theory, but it cannot estimate the system uncertainties due to variability of the system state probabilities. The article first reviews how information entropy expresses the probabilistic uncertainties arising from unevenness of the probability distributions of system states. Next, it argues that the conditional entropy with respect to system operational and failure states appropriately describes system redundancy and robustness, respectively. Finally, the article concludes that the joint probabilistic uncertainties of reliability, redundancy, and robustness define the integral system safety. The concept of integral system safety allows more comprehensive definitions of favorable system functional properties, configuration evaluation, optimization, and decision making in engineering.
Published as: "Uncertainty of Integral System Safety in Engineering," K. Ziha, ASCE-ASME Journal of Risk and Uncertainty in Engineering Systems, Part B: Mechanical Engineering, 2021-07-29. DOI: 10.1115/1.4051939.
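The entropy-based quantities the article builds on can be illustrated numerically. The four-state system and its probabilities below are hypothetical, chosen only to show the mechanics of conditioning on operational versus failure states.

```python
import numpy as np

def entropy(p):
    # Shannon entropy (in nats) of a discrete probability distribution.
    p = np.asarray(p, dtype=float)
    p = p[p > 0]
    return float(-np.sum(p * np.log(p)))

# Hypothetical system with four states: two operational, two failed.
p_states = np.array([0.60, 0.30, 0.07, 0.03])
operational = np.array([True, True, False, False])

R = p_states[operational].sum()   # reliability (prob. of any operational state)
F = 1.0 - R                       # failure probability

# Conditional entropies over the operational and failure states: per the
# article's interpretation, these relate to redundancy and robustness.
H_redundancy = entropy(p_states[operational] / R)
H_robustness = entropy(p_states[~operational] / F)
```

A perfectly even split among operational states maximizes the redundancy-related entropy; concentrating probability in one state drives it toward zero.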
As autonomous vehicle (AV) intelligence for controllability continues to develop, involving increasingly complex and interconnected systems, the maturity of AV technology increasingly depends on the reliability of these systems and of the interactions among them. Hazard analysis is typically used to identify potential system risks and avoid loss of AV system functionality. Conventional hazard analysis methods are commonly used for traditional standalone systems, and newer methods have been developed that may be better suited to AV system-of-systems complexity. However, a comprehensive comparison of hazard analysis methods for AV systems is lacking. In this study, the traditional hazard analysis methods, hazard and operability (HAZOP) and failure mode and effects analysis (FMEA), as well as more recent methods, the functional resonance analysis method (FRAM; Hollnagel, 2004, 2012) and system-theoretic process analysis (STPA; Leveson, 2011), are applied to an automatic emergency braking system. This system is designed to avoid collisions by using the surrounding sensors to detect objects on the road, warning drivers with alerts about any collision risk, and actuating automatic partial/full braking through calculated adaptive braking deceleration. The objective of this work is to evaluate the methods in terms of their applicability to AV technologies. The advantages of HAZOP, FMEA, FRAM, and STPA, as well as the possibility of combining them to achieve systematic risk identification in practice, are discussed.
Published as: "Comparison of the HAZOP, FMEA, FRAM and STPA Methods for the Hazard Analysis of Automatic Emergency Brake Systems," Liangliang Sun, Yanfu Li, E. Zio, ASCE-ASME Journal of Risk and Uncertainty in Engineering Systems, Part B: Mechanical Engineering, 2021-07-29. DOI: 10.1115/1.4051940.
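As one concrete point of comparison among the methods, FMEA's classical risk priority number (RPN) ranking is easy to sketch. The failure modes and 1-10 ratings below are hypothetical illustrations, not taken from the paper's case study.

```python
# Hypothetical failure modes for an automatic emergency braking system,
# each with illustrative (severity, occurrence, detection) ratings on the
# usual 1-10 FMEA scales.
failure_modes = [
    ("radar blinded by heavy rain",     8, 5, 4),
    ("false object detection",          6, 4, 3),
    ("brake actuator delayed response", 9, 2, 5),
    ("driver alert not issued",         7, 3, 6),
]

def rpn(severity, occurrence, detection):
    # Risk priority number: the product used to rank failure modes in FMEA.
    return severity * occurrence * detection

# Rank failure modes from highest to lowest risk priority.
ranked = sorted(failure_modes, key=lambda m: rpn(*m[1:]), reverse=True)
```

STPA and FRAM, by contrast, do not reduce each hazard to a single product of ratings, which is one reason the paper considers combining the methods.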
Antriksh Sharma, Jie Chen, Evan Diewald, A. Imanian, J. Beuth, Yongming Liu
Additive manufacturing (AM) has been extensively investigated in recent years for applications spanning a wide range of engineering functionalities, such as mechanical, acoustic, thermal, and electrical properties. A data-driven approach is proposed to investigate the influence of major fabrication parameters on laser-based additively manufactured Ti–6Al–4V. Two laser-based powder bed fusion techniques, selective laser melting (SLM) and direct metal laser sintering (DMLS), are investigated, with tensile-property data for Ti–6Al–4V and the corresponding fabrication parameters collected from the open literature. Statistical data analysis is performed for four fabrication parameters (scanning speed, laser power, hatch spacing, and powder layer thickness) and three postfabrication parameters (heating temperature, heating time, and whether the part was hot isostatically pressed), which are major influencing factors, to identify their effects on the static mechanical properties (yield strength, ultimate tensile strength, and elongation). To characterize the relationship between the input and output parameters, both linear regression and artificial neural network (ANN) models are developed, using 53 and 100 datasets for the SLM and DMLS processes, respectively. The linear regression models achieved average R² values of 0.351 and 0.507, compared with 0.908 and 0.833 for the nonlinear ANN models, for SLM- and DMLS-based modeling, respectively. Both local and global sensitivity analyses are carried out to identify the important factors for future optimal design. Local sensitivity analysis (SA) suggests that SLM is most sensitive to laser power, scanning speed, and heat treatment temperature, while DMLS is most sensitive to heat treatment temperature, hatch spacing, and laser power. For DMLS-fabricated Ti–6Al–4V, laser power and scan speed are found to be the most impactful input parameters for tensile properties, while heating time is the least influential. The global sensitivity analysis results can be used to tailor the alloy's static properties as required, while the local sensitivity results can help optimize already tailored design properties. Sobol's global sensitivity analysis identifies laser power, heating temperature, and hatch spacing as the most influential parameters for alloy strength, and powder layer thickness followed by scanning speed as the prominent parameters for elongation, for SLM-fabricated Ti–6Al–4V. Future work is still needed to address limitations of this study related to the limited dataset availability.
Published as: "Data-Driven Sensitivity Analysis for Static Mechanical Properties of Additively Manufactured Ti–6Al–4V," ASCE-ASME Journal of Risk and Uncertainty in Engineering Systems, Part B: Mechanical Engineering, 2021-07-16. DOI: 10.1115/1.4051799.
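Sobol's variance-based global sensitivity analysis, mentioned in the abstract, can be sketched with a pick-freeze Monte Carlo estimator. The linear response below is a hypothetical stand-in for the fitted property models (its weights are illustrative, not fitted values), which makes the exact first-order indices known analytically: 16/21, 4/21, and 1/21.

```python
import numpy as np

rng = np.random.default_rng(1)
N = 100_000

def model(X):
    # Hypothetical linear response: "strength" driven by three normalized
    # process parameters (e.g., laser power, hatch spacing, heating time).
    return 4.0 * X[:, 0] + 2.0 * X[:, 1] + 1.0 * X[:, 2]

# Two independent sample matrices over the unit cube.
A = rng.uniform(size=(N, 3))
B = rng.uniform(size=(N, 3))
yA, yB = model(A), model(B)
V = np.concatenate([yA, yB]).var()

def first_order_sobol(i):
    # Saltelli-style pick-freeze estimate of the first-order index S_i:
    # replace column i of A with column i of B and correlate.
    ABi = A.copy()
    ABi[:, i] = B[:, i]
    return float(np.mean(yB * (model(ABi) - yA)) / V)

S = [first_order_sobol(i) for i in range(3)]  # analytic: 16/21, 4/21, 1/21
```

For the real problem the `model` would be the trained ANN, which is exactly why a cheap surrogate makes Sobol analysis affordable.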
Modern technical systems consist of heterogeneous components, including mechanical parts, hardware, and the extensive software that enables autonomous system operation. This heterogeneity and autonomy require appropriate models that can describe the mutual interaction of the components. UML and SysML are widely accepted candidates for system modeling and model-based analysis in early design phases, including the analysis of reliability properties. UML and SysML models are semi-formal; thus, transformation methods to formal models are required. Recently, we introduced a stochastic Dual-graph Error Propagation Model (DEPM). This model captures the control and data flow structures of a system and allows the computation of advanced risk metrics using probabilistic model checking techniques. This article presents a new automated method for transforming an annotated State Machine Diagram, extended with Activity Diagrams, into a hierarchical DEPM. This method will help reliability engineers keep error propagation models up to date and ensure their consistency with the available system models. The capabilities and limitations of the transformation algorithm are described in detail and demonstrated on a complete model-based error propagation analysis of an autonomous medical patient table.
Published as: "Automated Transformation of UML/SysML Behavioral Diagrams for Stochastic Error Propagation Analysis of Autonomous Systems," A. Morozov, Thomas Mutzke, K. Ding, ASCE-ASME Journal of Risk and Uncertainty in Engineering Systems, Part B: Mechanical Engineering, 2021-07-14. DOI: 10.1115/1.4051781.
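The kind of risk metric a DEPM-style analysis computes can be illustrated with a small discrete-time Markov chain over error-propagation states. The four states and all transition probabilities below are hypothetical, not taken from the patient-table case study.

```python
import numpy as np

# Hypothetical 4-state error-propagation chain per execution cycle:
# states = [ok, latent error, detected (safe stop), hazardous output].
P = np.array([
    [0.95, 0.05, 0.00, 0.00],  # ok -> latent error with p = 0.05
    [0.00, 0.60, 0.30, 0.10],  # latent error persists, is caught, or escapes
    [0.00, 0.00, 1.00, 0.00],  # detected: absorbing safe state
    [0.00, 0.00, 0.00, 1.00],  # hazardous output: absorbing failure state
])

def failure_probability(steps):
    # P(hazardous output within `steps` cycles), starting in the ok state.
    dist = np.array([1.0, 0.0, 0.0, 0.0])
    for _ in range(steps):
        dist = dist @ P
    return float(dist[3])
```

A probabilistic model checker such as PRISM evaluates exactly this kind of reachability property, but symbolically and for much larger state spaces.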
Silicon is one of the commonly used semiconductors for various industrial applications. Traditional silicon synthesis methods are often expensive and cannot meet the continuously growing demands for high-purity Si; electrodeposition is a promising and simple alternative. However, the electrodeposited products often possess nonuniform thicknesses due to various sources of uncertainty inherited from the fabrication process; to improve the quality of the coating products, it is crucial to better understand the influences of the sources of uncertainty. In this paper, uncertainty quantification (UQ) analysis is performed on the silicon electrodeposition process to evaluate the impacts of various experimental operation parameters on the thickness variation of the coated silicon layer and to find the optimal experimental conditions. To mitigate the high experimental and computational cost issues, a Gaussian process (GP) based surrogate model is constructed to conduct the UQ study with finite element (FE) simulation results as training data. It is found that the GP surrogate model can efficiently and accurately estimate the performance of the electrodeposition given certain experimental operation parameters. The results show that the electrodeposition process is sensitive to the geometric settings of the experiments, i.e., distance and area ratio between the counter and working electrodes; whereas other conditions, such as the potential of the counter electrode, temperature, and ion concentration in the electrolyte bath are less important. Furthermore, the optimal operating condition to deposit silicon is proposed to minimize the thickness variation of the coated silicon layer and to enhance the reliability of the electrodeposition experiment.
Published as: "Uncertainty Quantification Analysis on Silicon Electrodeposition Process Via Numerical Simulation Methods," Zhuoyuan Zheng, Pingfeng Wang, ASCE-ASME Journal of Risk and Uncertainty in Engineering Systems, Part B: Mechanical Engineering, 2021-07-07. DOI: 10.1115/1.4051700.
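The surrogate-based UQ loop the abstract describes (fit a GP to a handful of expensive simulations, then Monte Carlo sample it) can be sketched as follows. `fe_sim` is a hypothetical stand-in for the finite element model, and the inputs are normalized to [0, 1]; none of the values come from the paper.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, ConstantKernel

rng = np.random.default_rng(2)

def fe_sim(X):
    # Hypothetical stand-in for the FE simulation: coating-thickness
    # variation as a function of electrode distance X[:, 0] and electrode
    # area ratio X[:, 1] (both normalized).
    return 0.5 * X[:, 0] ** 2 + 2.0 * X[:, 1] + 0.1

# Train the GP surrogate on a small number of "expensive" FE runs.
X_train = rng.uniform(0.0, 1.0, size=(30, 2))
gp = GaussianProcessRegressor(kernel=ConstantKernel() * RBF([0.5, 0.5]),
                              normalize_y=True).fit(X_train, fe_sim(X_train))

# Monte Carlo propagation of input uncertainty through the cheap surrogate.
X_mc = rng.uniform(0.0, 1.0, size=(5000, 2))
y_mc = gp.predict(X_mc)
mean, std = float(y_mc.mean()), float(y_mc.std())
```

The 5000-sample propagation costs milliseconds on the surrogate; running the FE model that many times is what the GP is there to avoid.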
Based on a novel control-based dynamic modeling framework, this paper proposes two new indicators, resilience by mitigation and resilience by recovery, for the resilience analysis of interdependent critical infrastructures (ICIs) under disruptions. The former is built from the protection activities before and during the mitigation phase of a disruptive event; the latter results from the restoration efforts that take place in the recovery phase. The total resilience of ICIs combines these two aspects, taking into account the preferences of the decision makers. We demonstrate the applicability of the proposed modeling framework and metrics in a case study involving ICIs comprising a power grid and a gas distribution system. Using the new resilience indicators, the priorities of subsystems and links within ICIs at different phases can be ranked; therefore, different resilience strategies at different phases of disruptive events can be compared. The results show that the proposed metrics can be used by ICI stakeholders to improve the effectiveness of system protection measures.
Published as: "Resilience Assessment and Importance Measure for Interdependent Critical Infrastructures," Xing Liu, Yiping Fang, E. Ferrario, E. Zio, ASCE-ASME Journal of Risk and Uncertainty in Engineering Systems, Part B: Mechanical Engineering, 2021-06-17. DOI: 10.1115/1.4051196.
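The mitigation/recovery split can be illustrated on a generic performance trajectory. The curve, the time stamps, and the specific indicator formulas below are illustrative simplifications, not the paper's control-based definitions.

```python
import numpy as np

# Illustrative performance trajectory of an infrastructure system over a
# disruptive event (hypothetical values). Nominal performance is 1.0; the
# disruption hits at t = 2 h and recovery completes at t = 8 h.
t = np.arange(11, dtype=float)  # hours
p = np.array([1.0, 1.0, 0.6, 0.5, 0.5, 0.7, 0.8, 0.9, 1.0, 1.0, 1.0])

def trapezoid(y, x):
    # Trapezoidal-rule integral of y over x.
    return float((0.5 * (y[1:] + y[:-1]) * np.diff(x)).sum())

# Mitigation-oriented indicator: performance retained at the worst point.
res_mitigation = float(p.min())

# Recovery-oriented indicator: retained performance integrated over the
# window, relative to the undisrupted nominal performance.
res_recovery = trapezoid(p, t) / trapezoid(np.ones_like(p), t)

# A total indicator can weight the two by decision-maker preference w.
w = 0.4
res_total = w * res_mitigation + (1.0 - w) * res_recovery
```

Varying `w` is how the decision makers' preferences enter the total resilience score; ranking subsystems then amounts to recomputing these indicators with each subsystem perturbed.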
The growing demand for clean renewable energy sources and the lack of suitable nearshore sites are moving the offshore wind industry toward developing larger wind turbines in deeper water locations further offshore. This is adding significant uncertainty to the geotechnical design of monopiles used as foundations for these systems. Soil testing becomes more challenging, rigid monopile behavior is less certain, and design methods are being applied outside the bounds of the datasets from which they were originally derived. This paper examines the potential impact of certain elements of geotechnical uncertainty on monotonic load–displacement behavior and design system natural frequency of an example monopile-supported offshore wind turbine (OWT). Geotechnical uncertainty is considered in terms of spatial variability in soil properties derived from cone penetration tests (CPT), parameter transformation uncertainty using the rigidity index, and design choice for subgrade reaction modeling. Results suggest that spatial variability in CPT properties exhibits limited impact on design load–displacement characteristics of monopiles as vertical spatial variability tends to be averaged out in the process to develop discrete soil reaction-lateral displacement (p-y) models. This highlights a potential issue whereby localized variations in soil properties may not be captured in certain models. Spatial variability in CPT data has a noticeable effect on predicted system frequency responses of OWTs employing a subgrade reaction model approach, and the influence of subgrade reaction model choice is significant. The purpose of this paper is to investigate the effect of uncertainty in soil data, model transformation, and design model choice on resulting structural behavior for a subset of available design approaches. 
It should be noted that significant further uncertainty exists and a wide variety of alternative models can be used by designers, so the results should be interpreted qualitatively.
{"title":"Impact of Geotechnical Uncertainty on the Preliminary Design of Monopiles Supporting Offshore Wind Turbines","authors":"C. Reale, J. Tott-Buswell, L. Prendergast","doi":"10.1115/1.4051418","DOIUrl":"https://doi.org/10.1115/1.4051418","url":null,"abstract":"\u0000 The growing demand for clean renewable energy sources and the lack of suitable nearshore sites are moving the offshore wind industry toward developing larger wind turbines in deeper water locations further offshore. This is adding significant uncertainty to the geotechnical design of monopiles used as foundations for these systems. Soil testing becomes more challenging, rigid monopile behavior is less certain, and design methods are being applied outside the bounds of the datasets from which they were originally derived. This paper examines the potential impact of certain elements of geotechnical uncertainty on monotonic load–displacement behavior and design system natural frequency of an example monopile-supported offshore wind turbine (OWT). Geotechnical uncertainty is considered in terms of spatial variability in soil properties derived from cone penetration tests (CPT), parameter transformation uncertainty using the rigidity index, and design choice for subgrade reaction modeling. Results suggest that spatial variability in CPT properties exhibits limited impact on design load–displacement characteristics of monopiles as vertical spatial variability tends to be averaged out in the process to develop discrete soil reaction-lateral displacement (p-y) models. This highlights a potential issue whereby localized variations in soil properties may not be captured in certain models. Spatial variability in CPT data has a noticeable effect on predicted system frequency responses of OWTs employing a subgrade reaction model approach, and the influence of subgrade reaction model choice is significant. 
The purpose of this paper is to investigate the effect of uncertainty in soil data, model transformation, and design model choice on resulting structural behavior for a subset of available design approaches. It should be noted that significant further uncertainty exists and a wide variety of alternative models can be used by designers, so the results should be interpreted qualitatively.","PeriodicalId":44694,"journal":{"name":"ASCE-ASME Journal of Risk and Uncertainty in Engineering Systems Part B-Mechanical Engineering","volume":"31 1","pages":""},"PeriodicalIF":2.2,"publicationDate":"2021-06-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"80877149","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
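The abstract above notes that vertical spatial variability in CPT cone resistance "tends to be averaged out" when discrete soil reaction-lateral displacement (p-y) springs are formed. A minimal sketch of that mechanism, assuming a synthetic uncorrelated qc profile and a simple block-averaging scheme (the paper's actual random-field and transformation models are more elaborate):

```python
import random
import statistics

random.seed(0)

def cpt_profile(n_points, mean_qc=10.0, cov=0.3):
    """Synthetic cone resistance qc (MPa) readings with point-to-point scatter."""
    return [random.gauss(mean_qc, cov * mean_qc) for _ in range(n_points)]

def depth_average(profile, window):
    """Average qc over blocks of readings, as when lumping soil into discrete p-y springs."""
    return [statistics.mean(profile[i:i + window])
            for i in range(0, len(profile) - window + 1, window)]

# Over many realizations, compare scatter of raw readings vs spring-averaged values.
raw_sd, avg_sd = [], []
for _ in range(200):
    prof = cpt_profile(100)
    raw_sd.append(statistics.stdev(prof))
    avg_sd.append(statistics.stdev(depth_average(prof, 10)))

# Averaging over 10 readings shrinks the scatter (~1/sqrt(10) for uncorrelated noise),
# illustrating why localized qc variations may not survive into the p-y model.
print(statistics.mean(raw_sd), statistics.mean(avg_sd))
```

The reduction factor depends on the vertical correlation length of the soil, which this toy example ignores; with strongly correlated profiles much less variability is averaged out.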
{"title":"Discrete-Direct Model Calibration and Uncertainty Propagation Method Confirmed on Multi-Parameter Plasticity Model Calibrated to Sparse Random Field Data","authors":"V. Romero, J. Winokur, G. Orient, J. Dempsey","doi":"10.1115/1.4050371","DOIUrl":"https://doi.org/10.1115/1.4050371","url":null,"abstract":"\u0000 A discrete direct (DD) model calibration and uncertainty propagation approach is explained and demonstrated on a 4-parameter Johnson-Cook (J-C) strain-rate dependent material strength model for an aluminum alloy. The methodology's performance is characterized in many trials involving four random realizations of strain-rate dependent material-test data curves per trial, drawn from a large synthetic population. The J-C model is calibrated to particular combinations of the data curves to obtain calibration parameter sets which are then propagated to “Can Crush” structural model predictions to produce samples of predicted response variability. These are processed with appropriate sparse-sample uncertainty quantification (UQ) methods to estimate various statistics of response with an appropriate level of conservatism. This is tested on 16 output quantities (von Mises stresses and equivalent plastic strains) and it is shown that important statistics of the true variabilities of the 16 quantities are bounded with a high success rate that is reasonably predictable and controllable. The DD approach has several advantages over other calibration-UQ approaches like Bayesian inference for capturing and utilizing the information obtained from typically small numbers of replicate experiments in model calibration situations—especially when sparse replicate functional data are involved like force–displacement curves from material tests. 
The DD methodology is straightforward and efficient for calibration and propagation problems involving aleatory and epistemic uncertainties in calibration experiments, models, and procedures.","PeriodicalId":44694,"journal":{"name":"ASCE-ASME Journal of Risk and Uncertainty in Engineering Systems Part B-Mechanical Engineering","volume":"24 1","pages":""},"PeriodicalIF":2.2,"publicationDate":"2021-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"73044620","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
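The record above describes propagating calibrated parameter sets through a model to produce samples of predicted response variability. A hedged sketch of that propagation step, using the isothermal rate-dependent 4-parameter Johnson-Cook form and purely illustrative parameter values (the paper's calibrated sets, structural "Can Crush" model, and dedicated sparse-sample UQ estimators are not reproduced here):

```python
import math
import statistics

def jc_flow_stress(eps, eps_rate, A, B, n, C, eps_rate0=1.0):
    """4-parameter Johnson-Cook flow stress: (A + B*eps^n) * (1 + C*ln(rate/rate0))."""
    return (A + B * eps**n) * (1.0 + C * math.log(eps_rate / eps_rate0))

# Hypothetical calibrated parameter sets (A, B, n, C), one per random data realization.
param_sets = [
    (270.0, 155.0, 0.28, 0.013),
    (265.0, 160.0, 0.30, 0.015),
    (275.0, 150.0, 0.27, 0.012),
    (268.0, 158.0, 0.29, 0.014),
]

# Propagate each calibrated set to the same prediction point, giving a sample
# of predicted response variability to feed into sparse-sample UQ statistics.
preds = [jc_flow_stress(0.10, 100.0, *p) for p in param_sets]
mean, sd = statistics.mean(preds), statistics.stdev(preds)
print(f"mean={mean:.1f} MPa, sd={sd:.1f} MPa")
```

With only a handful of propagated samples, plain mean/sd summaries are optimistic; the paper's point is that purpose-built sparse-sample UQ methods are needed to bound the true variability with controlled conservatism.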
{"title":"Evidence Theory Representations for Properties Associated With Weak Link/Strong Link Systems, Part 2: Failure Time and Failure Temperature","authors":"J. C. Helton, D. Brooks, J. Darby","doi":"10.1115/1.4050584","DOIUrl":"https://doi.org/10.1115/1.4050584","url":null,"abstract":"\u0000 The use of evidence theory and associated cumulative plausibility functions (CPFs), cumulative belief functions (CBFs), cumulative distribution functions (CDFs), complementary cumulative plausibility functions (CCPFs), complementary cumulative belief functions (CCBFs), and complementary cumulative distribution functions (CCDFs) in the analysis of loss of assured safety (LOAS) for weak link (WL)/strong link (SL) systems is introduced and illustrated. Article content includes cumulative and complementary cumulative belief, plausibility, and probability for (i) time at which LOAS occurs for a one WL/two SL system, (ii) time at which a two-link system fails, (iii) temperature at which a two-link system fails, and (iv) temperature at which LOAS occurs for a one WL/two SL system. The presented results can be generalized to systems with more than one WL and two SLs.","PeriodicalId":44694,"journal":{"name":"ASCE-ASME Journal of Risk and Uncertainty in Engineering Systems Part B-Mechanical Engineering","volume":"14 1","pages":""},"PeriodicalIF":2.2,"publicationDate":"2021-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"90166608","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
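The cumulative belief and plausibility functions named in the record above have a compact computational form. A minimal sketch, assuming interval-valued focal elements with basic probability assignments (the intervals and masses below are illustrative, not from the paper): CBF(t) sums the mass of focal elements wholly inside [0, t], CPF(t) sums the mass of those merely intersecting it.

```python
# Focal elements for a failure time: (interval, basic probability assignment).
# Values are illustrative only; BPAs sum to 1.
focal_elements = [
    ((0.0, 2.0), 0.2),
    ((1.0, 4.0), 0.5),
    ((3.0, 6.0), 0.3),
]

def belief_leq(t):
    """CBF at t, Bel([0, t]): mass of focal elements entirely contained in [0, t]."""
    return sum(m for (lo, hi), m in focal_elements if hi <= t)

def plausibility_leq(t):
    """CPF at t, Pl([0, t]): mass of focal elements intersecting [0, t]."""
    return sum(m for (lo, hi), m in focal_elements if lo <= t)

# Sweeping t traces out the CBF and CPF; Bel <= Pl everywhere, and any CDF
# consistent with the evidence lies between them.
for t in (1.0, 2.0, 4.0, 6.0):
    print(t, belief_leq(t), plausibility_leq(t))
```

The complementary functions (CCBF/CCPF) follow as 1 minus the plausibility and belief of the complement, respectively, and the same construction applies to failure temperature.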