Automated Hardening of Deep Neural Network Architectures
Michael Beyer, Christoph Schorn, T. Fabarisov, A. Morozov, K. Janschek
DOI: 10.1115/imece2021-72891
Designing optimal neural network (NN) architectures is a difficult and time-consuming task, especially when error resilience and hardware efficiency are considered simultaneously. In our paper, we extend neural architecture search (NAS) to optimize an NN's error resilience and hardware-related metrics in addition to classification accuracy. To this end, we consider the error sensitivity of an NN at the architecture level during NAS and additionally incorporate checksums into the network as an external error detection mechanism. With an additional computational overhead as low as 17% for the discovered architectures, checksums are an efficient method to effectively enhance the error resilience of NNs. Furthermore, the results show that cell-based NN architectures maintain their error resilience characteristics when transferred to other tasks.
{"title":"Automated Hardening of Deep Neural Network Architectures","authors":"Michael Beyer, Christoph Schorn, T. Fabarisov, A. Morozov, K. Janschek","doi":"10.1115/imece2021-72891","DOIUrl":"https://doi.org/10.1115/imece2021-72891","url":null,"abstract":"\u0000 Designing optimal neural network (NN) architectures is a difficult and time-consuming task, especially when error resiliency and hardware efficiency are considered simultaneously. In our paper, we extend neural architecture search (NAS) to also optimize a NN’s error resilience and hardware related metrics in addition to classification accuarcy. To this end, we consider the error sensitivity of a NN on the architecture-level during NAS and additionally incorporate checksums into the network as an external error detection mechanism. With an additional computational overhead as low as 17% for the discovered architectures, checksums are an efficient method to effectively enhance the error resilience of NNs. Furthermore, the results show that cell-based NN architectures are able to maintain their error resilience characteristics when transferred to other tasks.","PeriodicalId":146533,"journal":{"name":"Volume 13: Safety Engineering, Risk, and Reliability Analysis; Research Posters","volume":"198 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131693640","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
An Approach for Safeguarding Autonomous Mobile Robots Using Monitoring Tools
Manuel Müller, Natalie Schinzel, N. Jazdi, M. Weyrich
DOI: 10.1115/imece2021-73087
Autonomous mobile robots with manipulators are becoming increasingly important in industry because they are the most flexible type of mobile robot. Consequently, these mobile robots are employed in dynamic, heterogeneous, and only partly structured environments. However, as laid out in ANSI/RIA R15.08-1-2020, new safety requirements arise that focus on the fitness of a mobile robot for its operational scenarios. In contrast to fenced industrial robots, mobile robots need to reason about their environment, specifically the monitored space, and adapt accordingly. Since the monitored space is often limited, today's autonomous industrial mobile robots of type C (IMR-C), as R15.08-1-2020 designates them, waste much of their potential by reducing their pace of work. To counteract this issue, we first analyze the effects of limited monitored space using system-theoretic process analysis (STPA) and then propose a novel monitoring tool that closes the blind spots of IMR-Cs. The comparison of the STPAs with and without external monitoring tools shows risk reduction in the original loss scenarios, but also new loss scenarios and the effort required to integrate the tools. We present a methodology that uses the Digital Twin to weigh the gains and losses of adding such a monitoring tool. The evaluation of our prototype in a goods-receipt scenario shows promising results.
{"title":"An Approach for Safeguarding Autonomous Mobile Robots Using Monitoring Tools","authors":"Manuel Müller, Natalie Schinzel, N. Jazdi, M. Weyrich","doi":"10.1115/imece2021-73087","DOIUrl":"https://doi.org/10.1115/imece2021-73087","url":null,"abstract":"\u0000 Autonomous mobile robots with manipulators are becoming increasingly important in industry because they are the most flexible type of mobile robots. Therefore, these mobile robots are employed in dynamic, heterogeneous and partly structured environments. However, as depicted in ANSI/RIA R15.08-1-2020, new safety requirements focusing the fitness of the mobile robots to its operational scenarios arise. In contrast to fenced industrial robots, mobile robots need to reason about their environment, specifically the monitored space and adapt accordingly. However, since the monitored space is often limited, today’s autonomous industrial mobile robots type C (IMR-C) as R15.08-1-2020 names this kind of robots waste much potential reducing their pace of work due to limited monitored space. To counteract this issue, we first analyze the effects of limited monitored space using system theoretic process analysis (STPA) and then come up with a novel monitoring tool closing the blind spots of the IMR-Cs. The comparison of the STPAs with and without external monitoring tools show risk reduction in the original loss scenarios but new loss scenarios and consuming effort in order to assemble them. We present a methodology to weight the gains and losses of assembling an additional monitoring tool using the Digital Twin. The evaluation of our prototype in a goods receipt scenario shows promising results.","PeriodicalId":146533,"journal":{"name":"Volume 13: Safety Engineering, Risk, and Reliability Analysis; Research Posters","volume":"33 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133464713","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A Prediction Software to Evaluate Frisbee Movement
Han Yang
DOI: 10.1115/imece2021-70925

The frisbee is a popular item for both entertainment and professional sport. Studying the aerodynamics of the frisbee is crucial to understanding its movement and optimizing its design. In this paper, I used basic fluid analysis to investigate how parameters affect the flying trajectory, as well as the maximum distance and maximum height, of a frisbee. First, the aerodynamic forces on a moving frisbee, including lift, drag, and gravity, were analyzed. Second, a set of governing equations describing the movement of the frisbee was derived. Third, a Matlab-based program to evaluate the trajectory of a flying frisbee was developed. Finally, I designed the graphical user interface and published the software for public use. By entering the settings of the frisbee movement, such as frisbee diameter, initial velocity, angle of attack, and wind speed, any user, without knowing the aerodynamic theory, can use this software to quickly determine the trajectory and the maximum distance of a moving frisbee. With the developed software, I investigated how these parameters influence the aerodynamic characteristics and flying performance of a frisbee.
{"title":"A Prediction Software to Evaluate Frisbee Movement","authors":"Han Yang","doi":"10.1115/imece2021-70925","DOIUrl":"https://doi.org/10.1115/imece2021-70925","url":null,"abstract":"\u0000 Frisbee is a popular item for both entertainment and professional sport. The aerodynamics study of the frisbee is crucial to understand its movement and optimize its design. In this paper, I used basic fluid analysis to investigate the effects of parameters on the flying trajectory, as well as maximum distance and maximum height of a frisbee. First, the aerodynamic forces on a moving frisbee, including lift force, drag force, and gravity, were analyzed. Second, a set of governing equations describing the movement of the frisbee was derived. Third, a Matlab-based program to evaluate the trajectory of a flying frisbee was developed. Finally, I designed the user graphic interface and published the software for public use. By inputting the settings of the frisbee movement, such as frisbee diameter, initial velocity, attack angle, wind speed, etc., any user, without knowing the aerodynamic theories, can use this software to quickly determine the trajectory and the maximum distance of a moving frisbee. With the developed software, I investigated how the parameters influence the aerodynamic characteristics and a frisbee’s flying performance.","PeriodicalId":146533,"journal":{"name":"Volume 13: Safety Engineering, Risk, and Reliability Analysis; Research Posters","volume":"14 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114474737","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The Role of Protective System Reliability Analysis in the Study of System Safety
M. Wortman, E. Kee, P. Kannan
DOI: 10.1115/imece2021-69562

Safety-critical protective systems mitigate possible collateral harm to the public when randomly occurring initiating events challenge the operational integrity of hazardous technologies. Quantifying the efficacy of protection remains a challenge to the engineers and regulators responsible for safety. In this paper, we explore the analytical relationship between protective system reliability and safety efficacy. Central to our discussion is the understanding that protective systems should not only be reliable over time but must also be highly effective at the exact instants at which initiating events arrive. Extending traditional system reliability analyses to quantify the effectiveness of protective systems challenged by potentially catastrophic initiating events requires identifying and redressing certain counterintuitive modeling pitfalls. Our purpose here is to reveal some of these pitfalls by appealing to well-known results from system reliability theory and the theory of stochastic point processes.
{"title":"The Role of Protective System Reliability Analysis in the Study of System Safety","authors":"M. Wortman, E. Kee, P. Kannan","doi":"10.1115/imece2021-69562","DOIUrl":"https://doi.org/10.1115/imece2021-69562","url":null,"abstract":"\u0000 Safety–critical protective systems mitigate possible collateral harm to the public, when randomly occurring initiating events challenge the operational integrity of hazardous technologies. Quantifying the efficacy of protection remains a challenge to engineers and regulators responsible for safety. In this paper, we will explore the analytical relationship between protective system reliability and safety efficacy. Central to our discussions is the understanding that: Not only should protective systems be reliable over time, but they must be highly effective at the exact instants of initiating event arrivals. Extending traditional system reliability analyses to quantify the effectiveness of protective systems that are challenged by potentially catastrophic initiating events, requires identifying and redressing certain modeling pitfalls that are counterintuitive. It is our purpose, here, to reveal some of these pitfalls by appealing to well known results from system reliability theory and the theory of stochastic point processes.","PeriodicalId":146533,"journal":{"name":"Volume 13: Safety Engineering, Risk, and Reliability Analysis; Research Posters","volume":"58 14","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"120885745","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Performance of Iterative Network Uncertainty Quantification for Multicomponent System Qualification
E. Rojas, John Tencer
DOI: 10.1115/imece2021-72345

In order to impact design decisions and realize the full promise of high-fidelity computational tools, simulation results must be integrated at the earliest stages of the design process. This is particularly challenging when dealing with uncertainty and optimizing for system-level performance metrics, because full-system models (often notoriously expensive and time-consuming to develop) are generally required to propagate uncertainties to system-level quantities of interest. Methods for propagating parameter and boundary condition uncertainty through networks of interconnected components hold promise for enabling design under uncertainty in real-world applications. These methods preclude the need for time-consuming mesh generation of full-system geometries when changes are made to components or subassemblies. Additionally, they explicitly tie full-system model predictions to component and subassembly validation data, which is valuable for qualification. This is accomplished by taking advantage of the fact that many engineered systems are inherently modular, comprising a hierarchy of components and subassemblies that are individually modified or replaced to define new system designs. We leverage this hierarchical structure to enable rapid model development and to incorporate uncertainty quantification and rigorous sensitivity analysis earlier in the design process.

The resulting formulation of the uncertainty propagation problem is iterative. We express the system model as a network of interconnected component models that exchange stochastic solution information at component boundaries. We use Jacobi iteration with Anderson acceleration to converge stochastic representations of system-level quantities of interest through successive evaluations of component or subassembly forward problems. We publish our open-source tools for uncertainty propagation in networks, noting that these tools are extensible and can be used with any simulation tool (including arbitrary surrogate modeling tools) through the construction of a simple Python interface class. Additional interface classes for a variety of simulation tools are currently under active development.

The performance of the uncertainty quantification method is determined by the number of iterations needed to achieve a desired level of accuracy. The performance of these networks is investigated for simple canonical systems from both heat transfer and solid mechanics perspectives; the models are examined with thermal and mechanical Dirichlet- and Neumann-type boundary conditions imposed separately, and the impact of the governing equations and boundary condition type on network performance is analyzed. The form of the boundary conditions is observed to have a large impact on the convergence rate, with Neumann-type boundary conditions leading to significant performance degradation compared to Dirichlet boundary conditions. Nonmonotonicity is observed in the solution convergence.
{"title":"Performance of Iterative Network Uncertainty Quantification for Multicomponent System Qualification","authors":"E. Rojas, John Tencer","doi":"10.1115/imece2021-72345","DOIUrl":"https://doi.org/10.1115/imece2021-72345","url":null,"abstract":"\u0000 In order to impact design decisions and realize the full promise of high-fidelity computational tools, simulation results must be integrated at the earliest stages in the design process. This is particularly challenging when dealing with uncertainty and optimizing for system-level performance metrics as full-system models (often notoriously expensive and time-consuming to develop) are generally required to propagate uncertainties to system-level quantities of interest. Methods for propagating parameter and boundary condition uncertainty in networks of interconnected components hold promise for enabling design under uncertainty in real-world applications. These methods preclude the need for time consuming mesh generation of full-system geometries when changes are made to components or subassemblies. Additionally, they explicitly tie full-system model predictions to component/subassembly validation data which is valuable for qualification. This is accomplished by taking advantage of the fact that many engineered systems are inherently modular, being comprised of a hierarchy of components and subassemblies which are individually modified or replaced to define new system designs. We leverage this hierarchical structure to enable rapid model development and the incorporation of uncertainty quantification and rigorous sensitivity analysis earlier in the design process.\u0000 The resulting formulation of the uncertainty propagation problem is iterative. We express the system model as a network of interconnected component models which exchange stochastic solution information at component boundaries. We utilize Jacobi iteration with Anderson acceleration to converge stochastic representations of system level quantities of interest through successive evaluations of component or subassembly forward problems. We publish our open-source tools for uncertainty propagation in networks remarking that these tools are extensible and can be used with any simulation tool (including arbitrary surrogate modeling tools) through the construction of a simple Python interface class. Additional interface classes for a variety of simulation tools are currently under active development.\u0000 The performance of the uncertainty quantification method is determined by the number of iterations needed to achieve a desired level of accuracy. Performance of these networks for simple canonical systems from both a heat transfer and solid mechanics perspective are investigated; the models are examined with thermal and mechanical Dirichlet and Neumann type boundary conditions separately imposed and the impact of varying governing equations and boundary condition type on the performance of the networks is analyzed. The form of the boundary conditions is observed to have a large impact on the convergence rate with Neumann-type boundary conditions corresponding to significant performance degradation compared to the Dirichlet boundary conditions. 
Nonmonotonicity is observed in the solution converge","PeriodicalId":146533,"journal":{"name":"Volume 13: Safety Engineering, Risk, and Reliability Analysis; Research Posters","volume":"4 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121090667","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
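The solver pattern named in the abstract, Jacobi iteration with Anderson acceleration on a fixed-point map, can be sketched compactly. Below is a minimal Python version applied to a toy two-component coupling written as an affine interface map g(x) = Ax + b; the map, the depth m, and all numbers are invented stand-ins for the component forward solves, not the authors' open-source tools:

```python
import numpy as np

def anderson_fixed_point(g, x0, m=2, tol=1e-10, kmax=500):
    """Fixed-point iteration x <- g(x) with Anderson acceleration of depth m.

    m=0 reduces to the plain (Jacobi-style) iteration.
    """
    x = np.asarray(x0, dtype=float)
    X, G = [], []                        # histories of iterates and g(x)
    for k in range(kmax):
        gx = g(x)
        f = gx - x                       # fixed-point residual
        if np.linalg.norm(f) < tol:
            return x, k
        X.append(x.copy()); G.append(gx.copy())
        mk = min(m, len(X) - 1)
        if mk == 0:
            x = gx                       # plain iteration step
        else:
            # Differences of residuals and of g-values over the window.
            dF = np.column_stack(
                [(G[-i] - X[-i]) - (G[-i - 1] - X[-i - 1])
                 for i in range(1, mk + 1)])
            dG = np.column_stack([G[-i] - G[-i - 1] for i in range(1, mk + 1)])
            gamma, *_ = np.linalg.lstsq(dF, f, rcond=None)
            x = gx - dG @ gamma          # accelerated update
    return x, kmax

# Toy "network": one Jacobi sweep over two coupled components reduces to an
# affine interface map g(x) = A x + b (coupling strengths are invented).
A = np.array([[0.0, 0.9],
              [0.8, 0.0]])
b = np.array([1.0, 2.0])
g = lambda x: A @ x + b

_, k_plain = anderson_fixed_point(g, np.zeros(2), m=0)
x_acc, k_acc = anderson_fixed_point(g, np.zeros(2), m=2)
print(f"plain Jacobi: {k_plain} iters, Anderson(m=2): {k_acc} iters")
print("fixed point:", x_acc, "exact:", np.linalg.solve(np.eye(2) - A, b))
```

With a coupling this strong, plain Jacobi needs on the order of a hundred sweeps, while the accelerated iteration converges in a handful, which is the behavior the paper quantifies for its canonical heat transfer and solid mechanics networks.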
Terrestrial Mission Extender for Weather Balloon Radiosonde
Carrington Chun, Joseph McBride, Kaveh Torabzadeh, Andrew Smith, Santana Roberts
DOI: 10.1115/imece2021-69459
Thousands of balloon-assisted meteorological sensor packages, known as radiosondes, are launched every day from monitoring stations across the continental United States. However, only a small fraction of these instrument payloads is ever recovered, with most ending up as hazardous electronics waste strewn across the country. A terrestrial landing system that can be retrofitted to common commercially available radiosondes may improve the landing survivability of these instrument payloads. Furthermore, such a landing platform could also support continued meteorological data acquisition and transmission, allowing the radiosonde to transition from high-altitude monitoring to surface-level sensor monitoring. Such a terrestrial mission extension module would not only drastically increase the utility of an existing radiosonde but could also improve radiosonde recovery rates and thereby reduce the electronics waste produced by regular weather balloon launches.
{"title":"Terrestrial Mission Extender for Weather Balloon Radiosonde","authors":"Carrington Chun, Joseph McBride, Kaveh Torabzadeh, Andrew Smith, Santana Roberts","doi":"10.1115/imece2021-69459","DOIUrl":"https://doi.org/10.1115/imece2021-69459","url":null,"abstract":"\u0000 Thousands of balloon-assisted meteorological sensor packages, known as radiosondes, are launched every day from various monitoring stations across the continental United States. However, only a small fraction of these instrument payloads are ever recovered, with most ending up as hazardous electronics waste strewn across the country. By creating a terrestrial landing system that can be retrofitted to common commercially available radiosondes, the landing survivability of these instrument payloads may be able to be improved. Furthermore, such a landing platform could also support continued meteorological data acquisition and transmission, allowing the radiosonde to transition from high-altitude monitoring to surface level sensor monitoring. Not only would such a terrestrial mission extension module fitted to a radiosonde drastically increase the potential utility of an existing radiosonde, but such a device could also improve radiosonde recovery rates, and therefore reduce the electronics waste being produced by regular weather balloon launches.","PeriodicalId":146533,"journal":{"name":"Volume 13: Safety Engineering, Risk, and Reliability Analysis; Research Posters","volume":"2 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125334682","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A Hybrid Methodology for Risk Mitigation During Development of Safety-Critical Autonomy Features
P. Zarifian, Divya Garikapati, Julia Pralle, Jennifer Dawson, Constantin Hubmann, Brielle Reiff, Raymond Tam, Gopi Gaddamadugu
DOI: 10.1115/imece2021-69313
Autonomous vehicle (AV) technology is a relatively nascent field, and the engineers developing it need frequent feedback on whether their algorithms are performing the driving task competently. Further, because of the complexity of AV systems, it is often lower risk to frequently test small, incremental changes than to delay testing and accumulate a large number of changes to the algorithms. While simulation and closed-course testing are useful and critically important tools, driving on public roads is ultimately necessary to truly understand system performance and identify potential edge cases. Maintaining a high safety standard that protects all road users during continual public road testing is of paramount importance for the AV industry.

The Waterfall methodology has a demonstrated track record for product safety but provides little flexibility for prototyping and incremental testing. The Agile methodology is known for enabling rapid development and incremental rollouts but possesses no inherent safety gates. When it comes to developing complex safety-critical autonomy features, particularly for dynamic environments such as those faced by autonomous vehicles, neither method fits on its own.

This paper presents a hybrid methodology that strikes a balance between safe and rapid development of autonomy features for the AV industry.
{"title":"A Hybrid Methodology for Risk Mitigation During Development of Safety-Critical Autonomy Features","authors":"P. Zarifian, Divya Garikapati, Julia Pralle, Jennifer Dawson, Constantin Hubmann, Brielle Reiff, Raymond Tam, Gopi Gaddamadugu","doi":"10.1115/imece2021-69313","DOIUrl":"https://doi.org/10.1115/imece2021-69313","url":null,"abstract":"\u0000 As a relatively nascent field, engineers developing autonomous vehicle (AV) technologies need frequent performance feedback on whether algorithms are performing the driving task competently. Further, because of the complexity of AV systems, it is often lower risk to frequently test small, incremental changes instead of delaying testing and accumulating a large number of changes to the algorithms. While simulation and closed course testing are useful and critically important tools, ultimately driving on public roads is necessary to truly understand system performance and identify potential edge cases. Maintaining a high safety standard to protect all road users during continual public road testing is of paramount importance for the AV industry.\u0000 The Waterfall methodology has a demonstrated track record for product safety, but does not provide much flexibility for prototyping and incremental testing. The Agile methodology is famous for enabling rapid development and incremental rollouts, but does not possess any inherent safety gates. When it comes to developing complex safety-critical autonomy features, particularly for dynamic environments such as in the case of autonomous vehicles, neither method is fitting.\u0000 This paper presents a hybrid methodology that strikes a balance between safe and rapid development of autonomy features for the AV industry.","PeriodicalId":146533,"journal":{"name":"Volume 13: Safety Engineering, Risk, and Reliability Analysis; Research Posters","volume":"72 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133748391","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
On the Modeling of Wildfires-Induced Release and Atmospheric Dispersion in Radioactively Contaminated Regions
Damla Polat, M. Diaconeasa
DOI: 10.1115/imece2021-71460

Nuclear energy is one of the most efficient forms of electricity production. However, it remains one of the public's greatest fears because of the potential effects of radiation on human health. Despite major developments in the nuclear sector, several gaps must still be studied to support the rigorous safety scrutiny of nuclear power plants (NPPs). Besides technical advances for the safer management of an NPP, another important element is a well-constructed and well-planned probabilistic risk assessment and management process. Realistic probabilistic risk assessment and management enable a proper emergency response in case of an accident or a situation hazardous to human health. Moreover, aside from the radiation emitted directly from radioactive sources inside an NPP, there may be indirect radiation emission from dispersions outside the plant's protected area. Consider, for example, forest fires occurring in the radioactively contaminated areas surrounding NPPs that suffered accidents with releases, such as Chernobyl or Fukushima Daiichi. Radioactive particles produced by burning contaminated forests can spread through the air and threaten public health; it has already been observed that fires in the forests around Chernobyl can increase the level of radiation in the air. Such events can occur in any area where nuclear facilities are located, and the forests contaminated after the Fukushima Daiichi NPP accident resemble those at Chernobyl. This study aims to develop the knowledge needed for early sensing and emergency response by performing atmospheric dispersion modeling and supporting a probabilistic risk assessment for a wildfire scenario in radioactively contaminated areas such as Chernobyl and Fukushima Daiichi. The study also provides a pathway for assessing the risk of nuclear contamination caused by wildfires around nuclear facilities.
{"title":"On the Modeling of Wildfires-Induced Release and Atmospheric Dispersion in Radioactively Contaminated Regions","authors":"Damla Polat, M. Diaconeasa","doi":"10.1115/imece2021-71460","DOIUrl":"https://doi.org/10.1115/imece2021-71460","url":null,"abstract":"\u0000 Nuclear energy is one of the most efficient types of electricity production. However, it is one of the biggest fears of people due to the potential radiation effects on human health. Despite the major developments in the nuclear sector, some gaps need to be studied for the higher safety scrutiny of nuclear power plants (NPPs). Besides technical advances for the safer management of an NPP, another important part is having a well-constructed and planned probabilistic risk assessment and management. Realistic probabilistic risk assessment and management provide proper emergency response in case of an accident or hazardous situation to human health. On the other hand, aside from the radiation emitted directly from radioactive sources inside the NPP, there may be indirect radiation emission from dispersions outside the plant’s protected area. For example, we can look at forest fires occurring in radioactively contaminated areas surrounding NPPs that suffered accidents with releases, such as Chernobyl or Fukushima Daiichi. Radioactive particles produced by burning contaminated forests could spread in the air and threaten public health. It has already been observed that fires in forests around Chernobyl can increase the level of radiation in the air. Such events have the possibility to occur in all areas where nuclear facilities are located. The forests contaminated after the Fukushima Daiichi NPP accident, resemble the ones at Chernobyl. This study aims to develop the knowledge for an early sensing and emergency response by doing an atmospheric dispersion modeling and supporting a probabilistic risk assessment for a wildfire scenario in radioactively contaminated areas, such as Chernobyl and Fukushima Daiichi. Also, this study provides a pathway to assessing the risk of nuclear contamination caused by wildfires around nuclear facilities.","PeriodicalId":146533,"journal":{"name":"Volume 13: Safety Engineering, Risk, and Reliability Analysis; Research Posters","volume":"86 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130413120","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Fresh Air Flow Required to Maintain Safe Carbon Dioxide Levels and Provide a Breathable Air Environment in a Refuge Alternative
C. DeGennaro, Lincan Yan, D. Yantek
DOI: 10.1115/imece2021-68680

Federal mining regulations in the United States mandate that underground coal mines install refuge alternatives (RAs) for miners to seek refuge after an inescapable disaster. RAs are required to isolate and protect occupants from hazardous conditions and to provide a life-sustaining, breathable air environment for a minimum of 96 hours. According to federal RA regulations, an RA's oxygen level (%O2) must be maintained between 18.5% and 23%, with the carbon dioxide level (%CO2) below 1%. Once an RA is occupied, human breathing can quickly decrease the %O2 and increase the %CO2. One method of providing an RA with a breathable air environment is to use a borehole air supply (BAS) to provide fresh air from the surface, purge existing harmful gases, and prevent harmful gas build-up. RA regulations require air supplies to provide air at 12.5 cubic feet per minute (cfm) per person. To investigate the minimum fresh air flow (FAF) rate needed to maintain interior %O2 and %CO2 within the mandated ranges, researchers conducted testing in a modified shipping container that represented the volume of an RA. During these tests, propane (C3H8) combustion and additional CO2 supplied from cylinders were used to match human O2 consumption and CO2 generation. The FAF rate supplied to the shipping container was varied to determine the minimum FAF rate required for the %CO2 inside the shipping container to stabilize below 1%. The test results showed that the minimum FAF rate was between 1.76 and 2.12 cfm per person. Therefore, the mandated per-person FAF rate provides a 6x–7x safety factor. Test results also showed that the %O2 range requirement was satisfied for the entire range of tested FAF rates from 1.76 to 12.5 cfm per person.

In this paper, researchers from the National Institute for Occupational Safety and Health (NIOSH) provide a repeatable test method that can be used to evaluate the FAF rate versus interior gas concentrations (%CO2 and %O2) for various occupancy levels to ensure a breathable air environment within a refuge alternative. This paper also discusses federal RA regulations and previous NIOSH research. Additionally, this paper describes the experimental concept and set-up, including the C3H8 combustion and supplemental CO2 delivery with gas flow rates used to simulate human breathing, the data collection sensors, laboratory modifications, and safety measures. Lastly, the paper discusses test results, including the time taken to reach hazardous interior %CO2 and %O2, as well as the %O2 and %CO2 resulting from several FAF rates that have been used to validate a predictive model.

This test method could be adopted to evaluate breathable air environments in refuge alternatives and confined enclosures in various industries.
{"title":"Fresh Air Flow Required to Maintain Safe Carbon Dioxide Levels and Provide a Breathable Air Environment in a Refuge Alternative","authors":"C. DeGennaro, Lincan Yan, D. Yantek","doi":"10.1115/imece2021-68680","DOIUrl":"https://doi.org/10.1115/imece2021-68680","url":null,"abstract":"\u0000 Federal mining regulations in the United States mandate that underground coal mines install refuge alternatives (RA) for miners to seek refuge after an inescapable disaster. RAs are required to isolate and protect occupants from hazardous conditions and to provide a life-sustaining, breathable air environment for a minimum of 96 hours. According to federal RA regulations, an RA’s oxygen levels (%O2) must be maintained between 18.5%–23% with carbon dioxide levels (%CO2) less than 1%. Once an RA is occupied, due to human breathing, the %O2 can decrease, and %CO2 levels can increase quickly. One method of providing an RA with a breathable air environment is to use a borehole air supply (BAS) to provide fresh air from the surface, purge existing harmful gases, and prevent harmful gas build-up. RA regulations require air supplies to provide air at 12.5 cubic feet per minute (cfm) per person. To investigate the minimum fresh air flow (FAF) rate needed to maintain interior %O2 and %CO2 within the mandated ranges, researchers conducted testing in a modified shipping container that represented the volume of an RA. During these tests, propane (C3H8) combustion and additional CO2 supplied from cylinders were used to match human O2 consumption and CO2 generation. The FAF rate supplied to the shipping container was varied to determine the minimum FAF rate required for the %CO2 inside the shipping container to stabilize below 1%. The test results showed that the minimum FAF rate was between 1.76–2.12 cfm per person. Therefore, the mandated per-person FAF rate provides a 6x–7x safety factor. Test results also showed that the %O2 range requirement was satisfied for the entire range of tested FAF rates from 1.76–12.5 cfm per person.\u0000 In this paper, researchers from the National Institute for Occupational Safety and Health (NIOSH) provide a repeatable test method that can be used to evaluate the FAF rate versus interior gas concentrations (%CO2 and %O2) for various occupancy levels to ensure a breathable air environment within a refuge alternative. This paper also discusses federal RA regulations and previous NIOSH research. Additionally, this paper provides an experimental concept and set-up description, including the C3H8 combustion and supplemental CO2 delivery with gas flow rates used to simulate human breathing, data collection sensors, laboratory modifications, and safety measures. Lastly, the paper discusses test results, including the amount of time taken to reach hazardous interior %CO2 and %O2, as well as %O2 and %CO2 resulting from several FAF rates that have been used to validate a predictive model. 
This test method could be adopted to evaluate breathable air environments in refuge alternatives and confined enclosures in various industries.","PeriodicalId":146533,"journal":{"name":"Volume 13: Safety Engineering, Risk, and Reliability Analysis; Research Posters","volume":"95 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127141581","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
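The reported minimum FAF rate is consistent with a simple well-mixed mass balance, V dC/dt = G + Q(C_in - C), whose steady state is C_ss = C_in + G/Q. The sketch below evaluates the implied minimum flow; the per-person CO2 generation rate is an assumed typical value for light activity, not a number from the paper:

```python
# Steady-state CO2 balance for a well-mixed enclosure (per occupant):
#   V dC/dt = G + Q * (C_in - C)   =>   C_ss = C_in + G / Q
G = 0.02        # assumed CO2 generation per person [cfm], light activity
C_in = 0.0004   # ambient CO2 fraction (0.04%)
C_max = 0.01    # regulatory limit (1%)

# Smallest fresh air flow Q that keeps the steady state at or below C_max.
Q_min = G / (C_max - C_in)
print(f"minimum fresh air flow: {Q_min:.2f} cfm per person")   # ~2.08
print(f"safety factor at 12.5 cfm: {12.5 / Q_min:.1f}x")       # ~6x
```

Under these assumptions the balance predicts roughly 2.1 cfm per person, which falls inside the paper's measured 1.76–2.12 cfm range and reproduces the stated 6x–7x safety factor for the mandated 12.5 cfm.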
Deep Learning-Based Error Mitigation for Assistive Exoskeleton With Computational-Resource-Limited Platform and Edge Tensor Processing Unit
T. Fabarisov, A. Morozov, I. Mamaev, K. Janschek
DOI: 10.1115/imece2021-70387

Recently, we introduced a new model-based fault injection method implemented as a highly customizable Simulink block called FIBlock. It supports the injection of typical faults of essential heterogeneous components of cyber-physical systems (CPS), such as sensors, computing hardware, and networks. The FIBlock allows the user to select a fault type and configure multiple parameters that set the error magnitude, fault activation time, and fault exposure duration.

The FIBlock can thus generate various types of highly adjustable CPS faults. We previously demonstrated its performance on a Simulink case study of the lower-limb exoskeleton EXOLEGS, an assistive device for the elderly in everyday life. In particular, we identified spatial and temporal thresholds for different fault types beyond which the Dynamic Movement Primitives-based control system could no longer adequately compensate for the errors.

In this paper, we propose a new deep learning-based approach for system failure prevention. We employ a Long Short-Term Memory (LSTM) network for error detection and mitigation. Errors are detected by comparing measurements against the network's predictions, and the LSTM models mitigate detected errors with the computed predictions only when the errors would otherwise lead to imminent failure (i.e., exceed the aforementioned thresholds). To compare our approach with previous findings, we trained two LSTM models, one on angular position and one on angular velocity signals. For evaluation, we performed fault injection experiments with varying fault effect parameters: a 'sensor freeze' fault was injected into the angular position sensor, and a 'stuck-at-0' fault was injected into the angular velocity sensor. The presented deep learning-based approach prevented system failure even when the injected faults substantially exceeded the thresholds. In addition, the choice of data access point was evaluated: we compared providing the LSTM input data (i) from the sensor output and (ii) from the controller output, and the paper presents the pros and cons of both options.

We deployed the trained LSTM models on an Edge Tensor Processing Unit. For that, the models were quantized, i.e., all 32-bit floating-point numbers (such as weights and activation outputs) were converted to the nearest 8-bit fixed-point numbers, and the models were converted to TensorFlow Lite models. A Coral USB Accelerator was coupled with a Raspberry Pi 4B for signal processing. The results prove the feasibility of the proposed method: because the LSTM models were converted to 8-bit integer TensorFlow Lite models, firm real-time error mitigation was possible. Furthermore, the light weight of the system and its minimal power consumption allow integration into wearable robotic systems.
{"title":"Deep Learning-Based Error Mitigation for Assistive Exoskeleton With Computational-Resource-Limited Platform and Edge Tensor Processing Unit","authors":"T. Fabarisov, A. Morozov, I. Mamaev, K. Janschek","doi":"10.1115/imece2021-70387","DOIUrl":"https://doi.org/10.1115/imece2021-70387","url":null,"abstract":"\u0000 Recently we introduced a new model-based fault injection method implemented as a highly customizable Simulink block called FIBlock. It supports the injection of typical faults of essential heterogeneous components of Cyber-Physical Systems (CPS), such as sensors, computing hardware, and network. The FIBlock allows to tune a fault type and configure multiple parameters to tune error magnitude, fault activation time, and fault exposure duration.\u0000 The FIBlock is able to generate various types of highly adjustable CPS faults. We demonstrated the performance of the FIBlock on a Simulink case study representing a lower-limb EXOLEGS exoskeleton, an assistive device for the elderly in everyday life. In particular, we discovered the spatial and temporal thresholds for different fault types. Upon exceeding said thresholds, the Dynamic Movement Primitives-based control system could no longer adequately compensate errors.\u0000 In this paper, we proposed a new Deep Learning-based approach for system failure prevention. We employed the Long Short-Term Memory (LSTM) network for error detection and mitigation. Error detection is achieved using the prediction approach. The LSTM models are mitigating the detected errors with computed predictions only when they were subject to the imminent failure (i.e., exceeded the aforementioned thresholds). To compare our approach with previous findings, we trained two LSTM models on angular position and angular velocity signals. For evaluation, we performed fault injection experiments with varying fault effect parameters. The ‘Sensor freeze’ fault was injected into the angular position sensor, and the ‘Stuck-at 0’ fault was injected into angular velocity sensor. The presented Deep Learning-based approach prevented system failure even when the injected faults were substantially exceeding thresholds. In addition, reasoning for data access point choice has been evaluated. We compared two options: (i) the input data for LSTM is provided from the sensor output and (ii) from the controller output. In the paper, the pros and cons for both options are presented.\u0000 We deployed the trained LSTM models on an Edge Tensor Processing Unit. For that, the models have been quantized, i.e. all the 32-bit floating-point numbers (such as weights and activation outputs) were converted to the nearest 8-bit fixed-point numbers and converted to the TensorFlow Lite models. The Coral USB Accelerator was coupled with a Raspberry Pi 4B for signal processing. The result proves the feasibility of the proposed method. Because the LSTM models were converted to the 8bit integer TensorFlow Lite models, it allowed firm real-time error mitigation. 
Furthermore, the light weight of the system and minimal power consumption allows its integration into wearable robotic systems.","PeriodicalId":146533,"journal":{"name":"Volume 13: Safety Engineering, Risk, and Reliability Analysis; Research Posters","volume":"39 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128729865","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
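The prediction-based detect-and-substitute loop at the core of the approach can be sketched in a few lines of TensorFlow/Keras. This is a minimal illustration, not the authors' models: a synthetic sine stands in for the angular-position channel, and the window length, architecture, and residual threshold are invented choices.

```python
import numpy as np
import tensorflow as tf

WINDOW = 20

# Minimal one-step-ahead predictor over a sliding window of a 1-D signal.
model = tf.keras.Sequential([
    tf.keras.layers.LSTM(32, input_shape=(WINDOW, 1)),
    tf.keras.layers.Dense(1),
])
model.compile(optimizer="adam", loss="mse")

# Train on a clean signal (synthetic gait-like sine as a stand-in).
t = np.arange(0, 200, 0.05)
clean = np.sin(t)
X = np.stack([clean[i:i + WINDOW] for i in range(len(clean) - WINDOW)])[..., None]
y = clean[WINDOW:]
model.fit(X, y, epochs=3, batch_size=64, verbose=0)

def mitigate(signal, threshold=0.3):
    """Replace samples whose prediction residual exceeds the threshold."""
    out = signal.copy()
    for i in range(WINDOW, len(out)):
        pred = model(out[i - WINDOW:i].reshape(1, WINDOW, 1)).numpy()[0, 0]
        if abs(out[i] - pred) > threshold:   # error detected ...
            out[i] = pred                    # ... substitute the prediction
    return out

faulty = clean.copy()
faulty[2000:2050] = faulty[1999]             # 'sensor freeze' injection
recovered = mitigate(faulty)
```

Because mitigated samples are fed back into the prediction window, the model keeps tracking the expected signal through the frozen interval. For Edge TPU deployment as described in the paper, the trained model would then be post-training quantized to 8-bit integers and converted with the TensorFlow Lite converter.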