The reverse engineering of a valid algebraic inequality often reveals a novel physical reality whose distinct signature is the algebraic inequality itself. This paper uses the reverse engineering of valid algebraic inequalities to generate new knowledge and to substantially improve the reliability of common series-parallel systems. The study shows that, for series-parallel systems with interchangeable redundant components, an asymmetric arrangement of components always yields higher system reliability than a symmetric arrangement, irrespective of the particular reliabilities of the components. Next, the paper presents novel system-reliability inequalities whose reverse engineering enables a significant enhancement of the reliability of series-parallel systems with asymmetric arrangements of redundant components, without knowledge of the individual component reliabilities. Lastly, the paper presents a new technique for validating complex algebraic inequalities associated with series-parallel systems, based on permutation of variable values and the method of segmentation.
{"title":"Enhancing the Reliability of Series-parallel Systems with Multiple Redundancies by Using System-reliability Inequalities","authors":"M. Todinov","doi":"10.1115/1.4062892","DOIUrl":"https://doi.org/10.1115/1.4062892","url":null,"abstract":"\u0000 The reverse engineering of a valid algebraic inequality often leads to a projection of a novel physical reality characterized by a distinct signature: the algebraic inequality itself. This paper uses reverse engineering of valid algebraic inequalities for generating new knowledge and substantially improving the reliability of common series-parallel systems. Our study emphasizes that in the case of series-parallel systems with interchangeable redundant components, the asymmetric arrangement of components always leads to higher system reliability than a symmetric arrangement. This finding remains valid, irrespective of the particular reliabilities characterizing the components. Next, the paper presents novel system reliability inequalities whose reverse engineering enabled significant enhancement of the reliability of series-parallel systems with asymmetric arrangements of redundant components, without knowledge of the individual component reliabilities. Lastly, the paper presents a new technique for validating complex algebraic inequalities associated with series-parallel systems. This technique relies on permutation of variable values and the method of segmentation.","PeriodicalId":44694,"journal":{"name":"ASCE-ASME Journal of Risk and Uncertainty in Engineering Systems Part B-Mechanical Engineering","volume":"9 1","pages":""},"PeriodicalIF":2.2,"publicationDate":"2023-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"76177508","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Converting mechanical vibrations into electrical power with vibratory energy harvesters can ensure the portability, efficiency, and sustainability of electronic devices and batteries. Vibratory energy harvesters are typically modeled as nonlinear oscillators subject to random excitation, and their design requires a complete characterization of their probabilistic responses. However, simulation techniques such as Monte Carlo are computationally prohibitive when an accurate estimate of the response probability distribution is needed. Alternatively, approximate methods such as stochastic averaging can estimate the probabilistic response of such systems at a reduced computational cost. In this paper, Hilbert transform-based stochastic averaging is used to model the output voltage amplitude as a Markovian stochastic process whose dynamics are governed by a stochastic differential equation with nonlinear drift and diffusion terms. Moreover, the voltage-amplitude-dependent damping and stiffness terms are determined via an appropriate equivalent linearization, and the stationary probability distribution of the output voltage amplitude is obtained analytically by solving the corresponding Fokker–Planck equation. Two examples demonstrate the accuracy of the obtained analytical probability distributions via comparisons with Monte Carlo simulation data.
{"title":"Electrical Response Estimation of Vibratory Energy Harvesters via Hilbert Transform Based Stochastic Averaging","authors":"K. R. D. dos Santos","doi":"10.1115/1.4062704","DOIUrl":"https://doi.org/10.1115/1.4062704","url":null,"abstract":"\u0000 Converting mechanical vibrations into electrical power with vibratory energy harvesters can ensure the portability, efficiency, and sustainability of electronic devices and batteries. Vibratory energy harvesters are typically modeled as nonlinear oscillators subject to random excitation, and their design requires a complete characterization of their probabilistic responses. However, simulation techniques such as Monte Carlo are computationally prohibitive when the accurate estimation of the response probability distribution is needed. Alternatively, approximate methods such as stochastic averaging can estimate the probabilistic response of such systems at a reduced computational cost. In this paper, the Hilbert transform based stochastic averaging is used to model the output voltage amplitude as a Markovian stochastic process with dynamics governed by a stochastic differential equation with nonlinear drift and diffusion terms. Moreover, the voltage amplitude dependent damping and stiffness terms are determined via an appropriate equivalent linearization, and the stationary probability distribution of the output voltage amplitude is obtained analytically by solving the corresponding Fokker–Plank equation. Two examples are used to demonstrate the accuracy of the obtained analytical probability distributions via comparisons with Monte Carlo simulation data.","PeriodicalId":44694,"journal":{"name":"ASCE-ASME Journal of Risk and Uncertainty in Engineering Systems Part B-Mechanical Engineering","volume":"9 1","pages":""},"PeriodicalIF":2.2,"publicationDate":"2023-06-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"86435027","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Benjamin D. Trump, A. Jin, S. Galaitsi, Christopher Cummings, H. Jarman, S. Greer, Vidur Sharma, I. Linkov
Equitable allocation and distribution of the COVID-19 vaccine have proven to be a major policy challenge, exacerbated by incomplete pandemic risk data. To rectify this shortcoming, a three-step data visualization methodology was developed to assess COVID-19 vaccination equity in the United States using state health department, U.S. Census, and CDC data. Part one establishes an equitable pathway deviation index to identify populations with limited vaccination. Part two measures perceived access and public intentions to vaccinate over time. Part three synthesizes these data with the social vulnerability index to identify areas and communities at particular risk. Results demonstrate significant equity differences at the census-tract level and across demographic and socioeconomic population characteristics. Results were used by various federal agencies to improve coordinated pandemic risk response and to implement a commitment to equity as defined by the Executive Order regarding COVID-19 vaccination and booster policy. This methodology can be applied in other fields where promoting health equity in public policy is essential.
{"title":"Equitable Response in Crisis: Methodology and Application for COVID-19","authors":"Benjamin D. Trump, A. Jin, S. Galaitsi, Christopher Cummings, H. Jarman, S. Greer, Vidur Sharma, I. Linkov","doi":"10.1115/1.4062683","DOIUrl":"https://doi.org/10.1115/1.4062683","url":null,"abstract":"\u0000 Equitable allocation and distribution of the COVID-19 vaccine have proven to be a major policy challenge exacerbated by incomplete pandemic risk data. To rectify this shortcoming, a three-step data visualization methodology was developed to assess COVID-19 vaccination equity in the United States using state health department, U.S. Census, and CDC data. Part one establishes an equitable pathway deviation index to identify populations with limited vaccination. Part two measures perceived access and public intentions to vaccinate over time. Part three synthesizes these data with the social vulnerability index to identify areas and communities at particular risk. Results demonstrate significant equity differences at a census-tract level, and across demographic and socioeconomic population characteristics. Results were used by various federal agencies to improve coordinated pandemic risk response and implement a commitment to equity as defined by the Executive Order regarding COVID-19 vaccination and booster policy. This methodology can be utilized in other fields where addressing the difficulties of promoting health equity in public policy is essential.","PeriodicalId":44694,"journal":{"name":"ASCE-ASME Journal of Risk and Uncertainty in Engineering Systems Part B-Mechanical Engineering","volume":"78 1","pages":""},"PeriodicalIF":2.2,"publicationDate":"2023-06-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"85854880","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
U.S. Gulf Coast refineries account for over half of the nation's total refining capacity, yet less than a third of the products refined in this region supply local markets. Due to the highly centralized nature of the U.S. petroleum distribution network, disruptions affecting Gulf Coast refineries can have widespread impacts. The objective of this study is to develop a sufficiently accurate predictive model for the likelihood and expected duration of refinery shutdowns under hurricane hazards. Such models are currently lacking in the literature yet essential for risk modeling of the cascading consequences of refinery shutdown, ranging from resilience analyses of petroleum networks to potential health effects on surrounding communities tied to startup and shutdown activities. A database of empirical refinery downtime and storm hazard data is developed, and statistical analyses are conducted to explore the relationship between refinery and storm characteristics and shutdown duration. The model with the highest predictive accuracy combines a logistic regression binary classification component for refinery shutdown potential with a Poisson generalized linear model component for downtime duration. To illustrate the utility of the newly developed model, a case study is conducted exploring the impact of two storms affecting the Houston Ship Channel and surrounding region. Both the regional refining resilience and the distribution network resilience are quantified, including uncertainty propagation. Such analyses reveal the local-community to nationwide impacts of refining disruptions and can support resilience enhancement decisions.
{"title":"Development and Application of a Predictive Model for Estimating Refinery Shutdown Duration and Resilience Impacts Due to Hurricane Hazards","authors":"Kendall M. Capshaw, J. Padgett","doi":"10.1115/1.4062681","DOIUrl":"https://doi.org/10.1115/1.4062681","url":null,"abstract":"\u0000 U.S. Gulf Coast refineries account for over half of the total refining capacity of the nation. However, less than a third of products refined in this region are used to supply local markets. Due to the highly centralized nature of the U.S. petroleum distribution network, disruptions affecting Gulf Coast refineries can have widespread impacts. The objective of this study is to develop a sufficient predictive model for the likelihood and expected duration of refinery shutdowns under hurricane hazards. Such models are currently lacking in the literature yet essential for risk modeling of the cascading consequences of refinery shutdown ranging from resilience analyses of petroleum networks to potential health effects on surrounding communities tied to startup and shutdown activities. A database of empirical refinery downtime and storm hazards data is developed, and statistical analyses are conducted to explore the relationship between refinery and storm characteristics and shutdown duration. The proposed method with the highest predictive accuracy is found to be a model comprised of a logistic regression binary classification component related to refinery shutdown potential and a Poisson distribution generalized linear model component related to downtime duration determination. To illustrate the utility of the newly developed model, a case study is conducted exploring the impact of two storms affecting the Houston Ship Channel and surrounding region. Both the regional refining resilience as well as the distribution network resilience are quantified, including uncertainty propagation. Such analyses reveal local community to nationwide impacts of refining disruptions and can support resilience enhancement decisions.","PeriodicalId":44694,"journal":{"name":"ASCE-ASME Journal of Risk and Uncertainty in Engineering Systems Part B-Mechanical Engineering","volume":"59 1","pages":""},"PeriodicalIF":2.2,"publicationDate":"2023-06-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"80537312","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Although the pipeline network is the safest mode of oil and gas transportation, the pipeline failure rate has increased significantly over the last decade, particularly for aging pipelines. Predicting failure risk and prioritizing the riskiest assets from a large set of pipelines is one of the most demanding tasks for utilities. Machine learning (ML) applications in pipeline failure risk prediction have recently shown promising results. However, due to safety and security concerns, obtaining sufficient operation and failure data to train ML models accurately is a significant challenge. This study employed a Generative Adversarial Network (GAN) based framework to generate synthetic pipeline data (D_Syn, N = 100) from a subset (70%) of experimental burst-test data (D_Exp) compiled from the literature (N = 92) to overcome the limited access to operational data. The proposed framework was tested on (1) real data and (2) combined real and synthetic data. The burst failure risk of corroded oil and gas pipelines was determined using probabilistic approaches, and pipelines were classified into two classes: (1) low risk (p_f between 0 and 0.5) and (2) high risk (p_f > 0.5). Two Random Forest (RF) models (M_Exp and M_Comb) were trained using a subset of the experimental pipeline data (D_Exp, N = 64) and the combined data (D_Exp + D_Syn, N = 164), respectively. These models were validated on the remaining subset (30%) of the experimental test data (N = 28). The validation results reveal that adding synthetic data further improves the performance of the ML models: the area under the ROC curve was 0.96 for the real-data model (M_Exp) and 0.99 for the combined-data model (M_Comb). The combined model, with its improved performance, can be used in strategic oil and gas pipeline resilience improvement planning, which sets long-term critical decisions regarding maintenance and potential replacement of pipes.
{"title":"Synthetic Data Generation Using Generative Adversarial Network (gan) for Burst Failure Risk Analysis of Oil and Gas Pipelines","authors":"R. K. Mazumder, Gourav Modanwal, Yue Li","doi":"10.1115/1.4062741","DOIUrl":"https://doi.org/10.1115/1.4062741","url":null,"abstract":"\u0000 Despite the pipeline network being the safest mode of oil and gas transportation systems, the pipeline failure rate has increased significantly over the last decade, particularly for aging pipelines. Predicting failure risk and prioritizing the riskiest asset from a large set of pipelines is one of the demanding tasks for the utilities. Machine Learning (ML) application in pipeline failure risk prediction has recently shown promising results. However, due to safety and security concerns, obtaining sufficient operation and failure data to train ML models accurately is a significant challenge. This study employed a Generative Adversarial Network (GAN) based framework to generate synthetic pipeline data (DSyn, N=100) based on a subset (70%) of experimental burst test results data (DExp) compiled from the literature (N= 92) to overcome the limitation of accessing operational data. The proposed framework was tested on (1) real data, and (2) combined real and generated synthetic data. The burst failure risk of corroded oil and gas pipelines was determined using probabilistic approaches, and pipelines were classified into two classes: (1) low risk (pf:0-0.5) and (2) high risk (pf:>0.5). Two Random Forest (RF) models (MExp and MComb) were trained using a subset of actual experimental pipeline data (DExp, N=64) and combined data (DExp + DSyn, N=164). These models were validated on the remaining subset (30%) of experimental test data (N=28). The validation results reveal that adding synthetic data can further improve the performance of the ML models. The area under the ROC Curve was found to be 0.96 and 0.99 for real model (MExp) and combined model (MComb) data, respectively. The combined model with improved performance can be used in strategic oil and gas pipeline resilience improvement planning, which sets long-term critical decisions regarding maintenance and potential replacement of pipes.","PeriodicalId":44694,"journal":{"name":"ASCE-ASME Journal of Risk and Uncertainty in Engineering Systems Part B-Mechanical Engineering","volume":"1 1","pages":""},"PeriodicalIF":2.2,"publicationDate":"2023-06-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"87161180","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Engineering design and technological risk assessment both entail learning or discovering new knowledge. Optimal learning is a procedure whereby new knowledge is obtained while minimizing some specific measure of effort (e.g., time or money expended). A paradox is a statement that appears self-contradictory, contrary to common sense, or simply wrong, and yet might be true. The paradox of optimal learning is the assertion that a learning procedure cannot be optimized a priori—when designing the procedure—if the procedure depends on knowledge that the learning itself is intended to obtain. This is called a reflexive learning procedure. Many learning procedures can be optimized a priori. However, a priori optimization of a reflexive learning procedure is (usually) not possible. Most (but not all) reflexive learning procedures cannot be optimized without repeatedly implementing the procedure, which may be very expensive. We discuss the prevalence of reflexive learning and present examples of the paradox. We also characterize those situations in which a reflexive learning procedure can be optimized. We discuss a response to the paradox (when it holds) based on the concept of robustness to uncertainty as developed in info-gap decision theory. We explain that maximizing robustness is complementary to—but distinct from—minimizing a measure of effort of the learning procedure.
{"title":"Paradox of Optimal Learning: An Info-Gap Perspective","authors":"Y. Ben-Haim, S. Cogan","doi":"10.1115/1.4062511","DOIUrl":"https://doi.org/10.1115/1.4062511","url":null,"abstract":"\u0000 Engineering design and technological risk assessment both entail learning or discovering new knowledge. Optimal learning is a procedure whereby new knowledge is obtained while minimizing some specific measure of effort (e.g., time or money expended). A paradox is a statement that appears self-contradictory, contrary to common sense, or simply wrong, and yet might be true. The paradox of optimal learning is the assertion that a learning procedure cannot be optimized a priori—when designing the procedure—if the procedure depends on knowledge that the learning itself is intended to obtain. This is called a reflexive learning procedure. Many learning procedures can be optimized a priori. However, a priori optimization of a reflexive learning procedure is (usually) not possible. Most (but not all) reflexive learning procedures cannot be optimized without repeatedly implementing the procedure which may be very expensive. We discuss the prevalence of reflexive learning and present examples of the paradox. We also characterize those situations in which a reflexive learning procedure can be optimized. We discuss a response to the paradox (when it holds) based on the concept of robustness to uncertainty as developed in info-gap decision theory. We explain that maximizing the robustness is complementary to—but distinct from—minimizing a measure of effort of the learning procedure.","PeriodicalId":44694,"journal":{"name":"ASCE-ASME Journal of Risk and Uncertainty in Engineering Systems Part B-Mechanical Engineering","volume":"69 1","pages":""},"PeriodicalIF":2.2,"publicationDate":"2023-06-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"89145456","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Civil infrastructure systems have become highly complex and are thus increasingly vulnerable to disasters. The concept of disaster resilience, the overall capability of a system to manage risks posed by catastrophic events, is emerging to address this challenge. Recently, a system-reliability-based disaster resilience analysis framework was proposed for a holistic assessment of the components' reliability, the system's redundancy, and society's ability to recover the system functionality. The framework was applied to individual structures to produce diagrams visualizing pairs of the reliability index (β) and the redundancy index (p), defined to quantify, respectively, the likelihood of each initial disruption scenario and the corresponding system-level failure probability. This paper develops methods to apply the β-p analysis framework to infrastructure networks and demonstrates its capability to evaluate the disaster resilience of networks from a system reliability viewpoint. We also propose a new causality-based importance measure for network components, based on the β-p analysis and a causal diagram model that can account for the causality mechanism of system failure. Compared with importance measures in the literature, the proposed measure evaluates a component's relative importance through a well-balanced consideration of network topology and reliability. The proposed measure is expected to provide helpful guidelines for making optimal decisions to secure the disaster resilience of infrastructure networks.
{"title":"System-Reliability-Based Disaster Resilience Analysis of Infrastructure Networks and Causality-Based Importance Measure","authors":"Youngjun Kwon, Junho Song","doi":"10.1115/1.4062682","DOIUrl":"https://doi.org/10.1115/1.4062682","url":null,"abstract":"\u0000 Civil infrastructure systems become highly complex and thus get more vulnerable to disasters. The concept of disaster resilience, the overall capability of a system to manage risks posed by catastrophic events, is emerging to address the challenge. Recently, a system-reliability-based disaster resilience analysis framework was proposed for a holistic assessment of the components' reliability, the system's redundancy, and the society's ability to recover the system functionality. The proposed framework was applied to individual structures to produce diagrams visualizing the pairs of the reliability index (β) and the redundancy index (p) defined to quantify the likelihood of each initial disruption scenario and the corresponding system-level failure probability, respectively. This paper develops methods to apply the β-p analysis framework to infrastructure networks and demonstrates its capability to evaluate the disaster resilience of networks from a system reliability viewpoint. We also propose a new causality-based importance measure of network components based on the β-p analysis and a causal diagram model that can consider the causality mechanism of the system failure. Compared with importance measures in the literature, the proposed measure can evaluate a component's relative importance through a well-balanced consideration of network topology and reliability. The proposed measure is expected to provide helpful guidelines for making optimal decisions to secure the disaster resilience of infrastructure networks.","PeriodicalId":44694,"journal":{"name":"ASCE-ASME Journal of Risk and Uncertainty in Engineering Systems Part B-Mechanical Engineering","volume":"40 1","pages":""},"PeriodicalIF":2.2,"publicationDate":"2023-06-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"79298027","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Lincan Yan, Dave S Yantek, Cory R DeGennaro, Rohan D Fernando
Federal regulations require refuge alternatives (RAs) in underground coal mines to provide a life-sustaining environment for miners trapped underground when escape is impossible. A breathable air supply is among those requirements. For built-in-place (BIP) RAs, a borehole air supply (BAS) is commonly used to supply fresh air from the surface. Federal regulations require that such a BAS supply fresh air at 12.5 cfm or more per person to maintain the oxygen concentration between 18.5% and 23% and the carbon dioxide level below the specified 1% limit. However, it is unclear whether 12.5 cfm is indeed needed to maintain this carbon dioxide level. The minimum fresh air flow (FAF) rate needed to maintain the 1% CO2 level depends on multiple factors, including the number of people and the volume of the BIP RA. In the past, to predict the interior CO2 concentration in an occupied RA, 96-h tests were performed using a physical human breathing simulator. However, given the essentially unlimited number of possible combinations (number of people, size of the BIP RA), it would be impractical to fully investigate the range of parameters that can affect the CO2 concentration using physical tests. In this paper, researchers at the National Institute for Occupational Safety and Health (NIOSH) developed a model that predicts how the %CO2 in an occupied confined space changes with time, given the number of occupants and the FAF rate. The model was then compared against and validated with test data. The benchmarked model can be used to predict the %CO2 for any number of people and any FAF rate without conducting a 96-h test. The methodology used in this model can also be used to estimate other gas levels within a confined space.
{"title":"Mathematical Modeling for Carbon Dioxide Level Within Confined Spaces.","authors":"Lincan Yan, Dave S Yantek, Cory R DeGennaro, Rohan D Fernando","doi":"10.1115/1.4055389","DOIUrl":"https://doi.org/10.1115/1.4055389","url":null,"abstract":"<p><p>Federal regulations require refuge alternatives (RAs) in underground coal mines to provide a life-sustaining environment for miners trapped underground when escape is impossible. A breathable air supply is among those requirements. For built-in-place (BIP) RAs, a borehole air supply (BAS) is commonly used to supply fresh air from the surface. Federal regulations require that such a BAS must supply fresh air at 12.5 cfm or more per person to maintain the oxygen concentration between 18.5% and 23% and carbon dioxide level below the 1% limit specified. However, it is unclear whether 12.5 cfm is indeed needed to maintain this carbon dioxide level. The minimal fresh air flow (FAF) rate needed to maintain the 1% CO<sub>2</sub> level will depend on multiple factors, including the number of people and the volume of the BIP RA. In the past, to predict the interior CO<sub>2</sub> concentration in an occupied RA, 96-h tests were performed using a physical human breathing simulator. However, given the infinite possibility of the combinations (number of people, size of the BIP RA), it would be impractical to fully investigate the range of parameters that can affect the CO<sub>2</sub> concentration using physical tests. In this paper, researchers at the National Institute for Occupational Safety and Health (NIOSH) developed a model that can predict how the %CO<sub>2</sub> in an occupied confined space changes with time given the number of occupants and the FAF rate. The model was then compared to and validated with test data. The benchmarked model can be used to predict the %CO<sub>2</sub> for any number of people and FAF rate without conducting a 96-h test. The methodology used in this model can also be used to estimate other gas levels within a confined space.</p>","PeriodicalId":44694,"journal":{"name":"ASCE-ASME Journal of Risk and Uncertainty in Engineering Systems Part B-Mechanical Engineering","volume":"9 2","pages":""},"PeriodicalIF":2.2,"publicationDate":"2023-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10772919/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139404726","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Koen van Mierlo, Augustin Persoons, M. Faes, D. Moens
Robust design optimisation of stochastic black-box functions is a challenging task in engineering practice. Crashworthiness optimisation qualifies as such a problem, especially with regard to its high computational cost. Moreover, in early design phases, there may be significant uncertainty about the numerical model parameters. Therefore, this paper proposes an adaptive surrogate-based strategy for robust design optimisation of noise-contaminated models under lack-of-knowledge uncertainty. This approach is a significant extension of the Robustness under Lack-of-Knowledge method (RULOK) previously introduced by the authors, which was limited to noise-free models. In this work, a Gaussian Process with a noisy kernel is used as the regression model. The learning process is adapted to account for a noise variance that is either imposed and known or empirically learned as part of the learning process. The method is demonstrated on three analytical benchmarks and one engineering crashworthiness optimisation problem. In the case studies, multiple ways of determining the noise kernel are investigated: (1) based on a coefficient of variation, (2) calibration in the Gaussian Process model, and (3) based on engineering judgement, including a study of the sensitivity of the result with respect to these parameters. The results highlight that the proposed method efficiently identifies a robust design point even with extremely limited or biased prior knowledge about the noise.
{"title":"Robust Design Optimization of Expensive Stochastic Simulators Under Lack-of-Knowledge","authors":"Koen van Mierlo, Augustin Persoons, M. Faes, D. Moens","doi":"10.1115/1.4056950","DOIUrl":"https://doi.org/10.1115/1.4056950","url":null,"abstract":"\u0000 Robust design optimisation of stochastic black-box functions is a challenging task in engineering practice. Crashworthiness optimisation qualifies as such problem especially with regards to the high computational costs. Moreover, in early design phases, there may be significant uncertainty about the numerical model parameters. Therefore, this paper proposes an adaptive surrogate-based strategy for robust design optimisation of noise-contaminated models under lack-of-knowledge uncertainty. This approach is a significant extension to the Robustness under Lack-of-Knowledge method (RULOK) previously introduced by the authors, which was limited to noise-free models. In this work it is proposed to use a Gaussian Process as a regression model based on a noisy kernel. The learning process is adapted to account for noise variance either imposed and known or empirically learned as part of the learning process. The method is demonstrated on three analytical benchmarks and one engineering crashworthiness optimisation problem. In the case studies, multiple ways of determining the noise kernel are investigated: (1) based on a coefficient of variation, (2) calibration in the Gaussian Process model, (3) based on engineering judgement, including a study of the sensitivity of the result with respect to these parameters. The results highlight that the proposed method is able to efficiently identify a robust design point even with extremely limited or biased prior knowledge about the noise.","PeriodicalId":44694,"journal":{"name":"ASCE-ASME Journal of Risk and Uncertainty in Engineering Systems Part B-Mechanical Engineering","volume":"69 1","pages":""},"PeriodicalIF":2.2,"publicationDate":"2023-02-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"85972378","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Adolphus Lye, Luca Marino, A. Cicirello, E. Patelli
Several online identification approaches have been proposed to identify, via Bayesian inference, the parameters and evolution models of engineering systems and structures when sequential datasets are available. In this work, a robust and "tune-free" sampler is proposed to extend one of the sequential Monte Carlo implementations to the identification of time-varying parameters, which can be assumed constant within each set of data collected but may vary across different sequences of data sets. The proposed approach implements the affine-invariant ensemble sampler in place of the Metropolis-Hastings sampler to update the samples. An adaptive-tuning algorithm is also proposed to automatically tune the step size of the affine-invariant ensemble sampler, which, in turn, controls the acceptance rate of the samples across iterations. Furthermore, a numerical investigation of the existence of inherent lower and upper bounds on the acceptance rate, which makes the algorithm robust by design, is also conducted. The proposed method allows for the offline and online identification of the most probable models under uncertainty and works independently of the underlying (often unknown) error model. The proposed sampling strategy is first verified against the existing sequential Monte Carlo sampler in a numerical example. Then, it is validated by identifying the time-varying parameters and the most probable model of a non-linear dynamical system using experimental data.
{"title":"Sequential Ensemble Monte Carlo Sampler for On-Line Bayesian Inference of Time-Varying Parameter In Engineering Applications","authors":"Adolphus Lye, Luca Marino, A. Cicirello, E. Patelli","doi":"10.1115/1.4056934","DOIUrl":"https://doi.org/10.1115/1.4056934","url":null,"abstract":"\u0000 Several online identification approaches have been proposed to identify parameters and evolution models of engineering systems and structures when sequential datasets are available via Bayesian inference. In this work, a robust and “tune-free” sampler is proposed to extend one of the Sequential Monte Carlo implementations for the identification of time-varying parameters which can be assumed constant within each set of data collected, but might vary across different sequences of data sets. The proposed approach involves the implementation of the Affine-invariant Ensemble sampler in place of the Metropolis-Hastings sampler to update the samples. An adaptive-tuning algorithm is also proposed to automatically tune the step size of the Affine-invariant ensemble sampler which, in turn, controls the acceptance rate of the samples across iterations. Furthermore, a numerical investigation behind the existence of inherent lower and upper bounds on the acceptance rate, making the algorithm robust by design, is also conducted. The proposed method allows for the offline and online identification of the most probable models under uncertainty. It works independently of the underlying (often unknown) error model. The proposed sampling strategy is first verified against the existing sequential Monte Carlo sampler in a numerical example. Then, it is validated by identifying the time-varying parameters and the most probable model of a non-linear dynamical system using experimental data.","PeriodicalId":44694,"journal":{"name":"ASCE-ASME Journal of Risk and Uncertainty in Engineering Systems Part B-Mechanical Engineering","volume":"119 1","pages":""},"PeriodicalIF":2.2,"publicationDate":"2023-02-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"77934419","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}