Contaminant Dispersion Simulation in a Digital Twin Framework for Critical Infrastructure Protection
Max von Danwitz, Jacopo Bonari, Philip Franz, Lisa Kühn, Marco Mattuschka, Alexander Popp
A digital twin framework for rapid predictions of atmospheric contaminant dispersion is developed to support informed decision-making in emergency situations. In an offline preparation phase, the geometry of a built environment is discretized with a finite element method (FEM) mesh, and a reduced-order model (ROM) of the steady-state incompressible Navier-Stokes equations is constructed for various wind conditions. During the online phase, the ROM then provides a fast wind field estimate based on the current wind speed. To support crisis management, several methodological building blocks are combined. Automatic FEM meshing of built environments and numerical flow solver capabilities enable fast forward simulations of contaminant dispersion using the advection-diffusion equation as the transport model. Further methods are integrated in the framework to address inverse problems, such as contaminant source localization based on sparse concentration measurements. Additionally, the contaminant dispersion model is coupled with a continuum-based pedestrian crowd model to derive fast and safe evacuation routes for people seeking protection during contaminant dispersion emergencies. The interplay of these methods is demonstrated in two critical infrastructure protection (CIP) test cases. Based on simulated real-world interaction (measurements, communication), this article demonstrates a full Measurement-Inversion-Prediction-Steering (MIPS) cycle, including a Bayesian formulation of the inverse problem.
{"title":"Contaminant Dispersion Simulation in a Digital Twin Framework for Critical Infrastructure Protection","authors":"Max von Danwitz, Jacopo Bonari, Philip Franz, Lisa Kühn, Marco Mattuschka, Alexander Popp","doi":"arxiv-2409.01253","DOIUrl":"https://doi.org/arxiv-2409.01253","url":null,"abstract":"A digital twin framework for rapid predictions of atmospheric contaminant\u0000dispersion is developed to support informed decision making in emergency\u0000situations. In an offline preparation phase, the geometry of a built\u0000environment is discretized with a finite element (FEM) mesh and a reduced-order\u0000model (ROM) of the steady-state incompressible Navier-Stokes equations is\u0000constructed for various wind conditions. Subsequently, the ROM provides a fast\u0000wind field estimate based on the current wind speed during the online phase. To\u0000support crisis management, several methodological building blocks are combined.\u0000Automatic FEM meshing of built environments and numerical flow solver\u0000capabilities enable fast forward-simulations of contaminant dispersion using\u0000the advection-diffusion equation as transport model. Further methods are\u0000integrated in the framework to address inverse problems such as contaminant\u0000source localization based on sparse concentration measurements. Additionally,\u0000the contaminant dispersion model is coupled with a continuum-based pedestrian\u0000crowd model to derive fast and safe evacuation routes for people seeking\u0000protection during contaminant dispersion emergencies. The interplay of these\u0000methods is demonstrated in two critical infrastructure protection (CIP) test\u0000cases. Based on simulated real world interaction (measurements, communication),\u0000this article demonstrates a full Measurement-Inversion-Prediction-Steering\u0000(MIPS) cycle including a Bayesian formulation of the inverse problem.","PeriodicalId":501309,"journal":{"name":"arXiv - CS - Computational Engineering, Finance, and Science","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2024-09-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142211230","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Multiagent Reinforcement Learning Enhanced Decision-making of Crew Agents During Floor Construction Process
Bin Yang, Boda Liu, Yilong Han, Xin Meng, Yifan Wang, Hansi Yang, Jianzhuang Xia
Fine-grained simulation of floor construction processes is essential for supporting lean management and the integration of information technology. However, existing research does not adequately address the on-site decision-making of constructors in selecting tasks and determining their sequence within the entire construction process. Moreover, decision-making frameworks from computer science and robotics are not directly applicable to construction scenarios. To facilitate intelligent simulation in construction, this study introduces the Construction Markov Decision Process (CMDP). The primary contribution of the CMDP framework lies in incorporating construction knowledge into its decision structure, observation modifications, and policy design, enabling agents to perceive the construction state and follow policy guidance to evaluate and reach a range of targets for optimizing the planning of construction activities. The CMDP is developed on the Unity platform, using a two-stage training approach with the multi-agent proximal policy optimization algorithm. A case study demonstrates the effectiveness of this framework: the low-level policy successfully simulates the construction process in continuous space, facilitating policy testing and training focused on reducing conflicts and blockages among crews, while the high-level policy improves the spatio-temporal planning of construction activities and generates construction patterns in distinct phases, leading to the discovery of new construction insights.
{"title":"Multiagent Reinforcement Learning Enhanced Decision-making of Crew Agents During Floor Construction Process","authors":"Bin Yang, Boda Liu, Yilong Han, Xin Meng, Yifan Wang, Hansi Yang, Jianzhuang Xia","doi":"arxiv-2409.01060","DOIUrl":"https://doi.org/arxiv-2409.01060","url":null,"abstract":"Fine-grained simulation of floor construction processes is essential for\u0000supporting lean management and the integration of information technology.\u0000However, existing research does not adequately address the on-site\u0000decision-making of constructors in selecting tasks and determining their\u0000sequence within the entire construction process. Moreover, decision-making\u0000frameworks from computer science and robotics are not directly applicable to\u0000construction scenarios. To facilitate intelligent simulation in construction,\u0000this study introduces the Construction Markov Decision Process (CMDP). The\u0000primary contribution of this CMDP framework lies in its construction knowledge\u0000in decision, observation modifications and policy design, enabling agents to\u0000perceive the construction state and follow policy guidance to evaluate and\u0000reach various range of targets for optimizing the planning of construction\u0000activities. The CMDP is developed on the Unity platform, utilizing a two-stage\u0000training approach with the multi-agent proximal policy optimization algorithm.\u0000A case study demonstrates the effectiveness of this framework: the low-level\u0000policy successfully simulates the construction process in continuous space,\u0000facilitating policy testing and training focused on reducing conflicts and\u0000blockages among crews; and the high-level policy improving the spatio-temporal\u0000planning of construction activities, generating construction patterns in\u0000distinct phases, leading to the discovery of new construction insights.","PeriodicalId":501309,"journal":{"name":"arXiv - CS - Computational Engineering, Finance, and Science","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2024-09-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142211231","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
InvariantStock: Learning Invariant Features for Mastering the Shifting Market
Haiyao Cao, Jinan Zou, Yuhang Liu, Zhen Zhang, Ehsan Abbasnejad, Anton van den Hengel, Javen Qinfeng Shi
Accurately predicting stock returns is crucial for effective portfolio management. However, existing methods often overlook a fundamental issue in the market, namely distribution shifts, making them less practical for predicting future markets or newly listed stocks. This study introduces a novel approach to address this challenge by focusing on the acquisition of invariant features across various environments, thereby enhancing robustness against distribution shifts. Specifically, we present InvariantStock, a learning framework comprising two key modules: an environment-aware prediction module and an environment-agnostic module. Through the design and joint training of these two modules, the proposed method can learn invariant features across different environments in a straightforward manner, significantly improving its ability to handle distribution shifts in diverse market settings. Our results demonstrate that InvariantStock not only delivers robust and accurate predictions but also outperforms existing baseline methods in both prediction tasks and backtesting within the dynamically changing markets of China and the United States.
{"title":"InvariantStock: Learning Invariant Features for Mastering the Shifting Market","authors":"Haiyao Cao, Jinan Zou, Yuhang Liu, Zhen Zhang, Ehsan Abbasnejad, Anton van den Hengel, Javen Qinfeng Shi","doi":"arxiv-2409.00671","DOIUrl":"https://doi.org/arxiv-2409.00671","url":null,"abstract":"Accurately predicting stock returns is crucial for effective portfolio\u0000management. However, existing methods often overlook a fundamental issue in the\u0000market, namely, distribution shifts, making them less practical for predicting\u0000future markets or newly listed stocks. This study introduces a novel approach\u0000to address this challenge by focusing on the acquisition of invariant features\u0000across various environments, thereby enhancing robustness against distribution\u0000shifts. Specifically, we present InvariantStock, a designed learning framework\u0000comprising two key modules: an environment-aware prediction module and an\u0000environment-agnostic module. Through the designed learning of these two\u0000modules, the proposed method can learn invariant features across different\u0000environments in a straightforward manner, significantly improving its ability\u0000to handle distribution shifts in diverse market settings. Our results\u0000demonstrate that the proposed InvariantStock not only delivers robust and\u0000accurate predictions but also outperforms existing baseline methods in both\u0000prediction tasks and backtesting within the dynamically changing markets of\u0000China and the United States.","PeriodicalId":501309,"journal":{"name":"arXiv - CS - Computational Engineering, Finance, and Science","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2024-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142211232","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Convolutional Hierarchical Deep Learning Neural Networks-Tensor Decomposition (C-HiDeNN-TD): a scalable surrogate modeling approach for large-scale physical systems
Jiachen Guo, Chanwook Park, Xiaoyu Xie, Zhongsheng Sang, Gregory J. Wagner, Wing Kam Liu
A common trend in simulation-driven engineering applications is the ever-increasing size and complexity of the problem, where classical numerical methods typically suffer from long computational times and high memory cost. Methods based on artificial intelligence have been extensively investigated to accelerate partial differential equation (PDE) solvers using data-driven surrogates. However, most data-driven surrogates require an extremely large amount of training data. In this paper, we propose the Convolutional Hierarchical Deep Learning Neural Network-Tensor Decomposition (C-HiDeNN-TD) method, which can directly obtain surrogate models by solving large-scale space-time PDEs without generating any offline training data. We compare the performance of the proposed method against classical numerical methods for extremely large-scale systems.
{"title":"Convolutional Hierarchical Deep Learning Neural Networks-Tensor Decomposition (C-HiDeNN-TD): a scalable surrogate modeling approach for large-scale physical systems","authors":"Jiachen Guo, Chanwook Park, Xiaoyu Xie, Zhongsheng Sang, Gregory J. Wagner, Wing Kam Liu","doi":"arxiv-2409.00329","DOIUrl":"https://doi.org/arxiv-2409.00329","url":null,"abstract":"A common trend in simulation-driven engineering applications is the\u0000ever-increasing size and complexity of the problem, where classical numerical\u0000methods typically suffer from significant computational time and huge memory\u0000cost. Methods based on artificial intelligence have been extensively\u0000investigated to accelerate partial differential equations (PDE) solvers using\u0000data-driven surrogates. However, most data-driven surrogates require an\u0000extremely large amount of training data. In this paper, we propose the\u0000Convolutional Hierarchical Deep Learning Neural Network-Tensor Decomposition\u0000(C-HiDeNN-TD) method, which can directly obtain surrogate models by solving\u0000large-scale space-time PDE without generating any offline training data. We\u0000compare the performance of the proposed method against classical numerical\u0000methods for extremely large-scale systems.","PeriodicalId":501309,"journal":{"name":"arXiv - CS - Computational Engineering, Finance, and Science","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2024-08-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142211233","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A survey on multi-fidelity surrogates for simulators with functional outputs: unified framework and benchmark
Lucas Brunel, Mathieu Balesdent, Loïc Brevault, Rodolphe Le Riche, Bruno Sudret
Multi-fidelity surrogate models combining dimensionality reduction and an intermediate surrogate in the reduced space allow a cost-effective emulation of simulators with functional outputs. The surrogate is an input-output mapping learned from a limited number of simulator evaluations. This computational efficiency makes surrogates commonly used for many-query tasks. Diverse methods for building them have been proposed in the literature, but they have only been partially compared. This paper introduces a unified framework encompassing the different surrogate families, followed by a methodological comparison and the exposition of practical considerations. More than a dozen existing multi-fidelity surrogates have been implemented under the unified framework and evaluated on a set of benchmark problems. Based on the results, guidelines and recommendations are proposed regarding multi-fidelity surrogates with functional outputs. Our study shows that most multi-fidelity surrogates outperform their tested single-fidelity counterparts under the considered settings, but no single surrogate performs best on every test case. Therefore, the selection of a surrogate should consider the specific properties of the emulated functions, in particular the correlation between the low- and high-fidelity simulators, the local nonlinear variations in the residual fields, and the size of the training datasets.
{"title":"A survey on multi-fidelity surrogates for simulators with functional outputs: unified framework and benchmark","authors":"Lucas Brunel, Mathieu Balesdent, Loïc Brevault, Rodolphe Le Riche, Bruno Sudret","doi":"arxiv-2408.17075","DOIUrl":"https://doi.org/arxiv-2408.17075","url":null,"abstract":"Multi-fidelity surrogate models combining dimensionality reduction and an\u0000intermediate surrogate in the reduced space allow a cost-effective emulation of\u0000simulators with functional outputs. The surrogate is an input-output mapping\u0000learned from a limited number of simulator evaluations. This computational\u0000efficiency makes surrogates commonly used for many-query tasks. Diverse methods\u0000for building them have been proposed in the literature, but they have only been\u0000partially compared. This paper introduces a unified framework encompassing the different\u0000surrogate families, followed by a methodological comparison and the exposition\u0000of practical considerations. More than a dozen of existing multi-fidelity\u0000surrogates have been implemented under the unified framework and evaluated on a\u0000set of benchmark problems. Based on the results, guidelines and recommendations\u0000are proposed regarding multi-fidelity surrogates with functional outputs. Our study shows that most multi-fidelity surrogates outperform their tested\u0000single-fidelity counterparts under the considered settings. But no particular\u0000surrogate is performing better on every test case. Therefore, the selection of\u0000a surrogate should consider the specific properties of the emulated functions,\u0000in particular the correlation between the low- and high-fidelity simulators,\u0000the size of the training set, the local nonlinear variations in the residual\u0000fields, and the size of the training datasets.","PeriodicalId":501309,"journal":{"name":"arXiv - CS - Computational Engineering, Finance, and Science","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2024-08-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142211238","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Modelling Growth, Remodelling and Damage of a Thick-walled Fibre-reinforced Artery with Active Response: Application to Cerebral Vasospasm and Treatment
Giulia Pederzani, Andrii Grytsan, Alfons G. Hoekstra, Anne M. Robertson, Paul N. Watton
Cerebral vasospasm, a prolonged constriction of cerebral arteries, is the leading cause of morbidity and mortality for patients who survive hospitalisation after aneurysmal subarachnoid haemorrhage. The recent finding that stent-retrievers can successfully treat the disease has challenged the viewpoint that damage to the extracellular matrix is necessary. We apply a 3D finite element rate-based constrained mixture model (rb-CMM) to simulate vasospasm, remodelling and treatment with stents. The artery is modelled as a thick-walled fibre-reinforced constrained mixture subject to physiological pressure and axial stretch. The model accounts for distributions of collagen fibre homeostatic stretches, vascular smooth muscle cell (VSMC) active response, remodelling and damage. After simulating vasospasm and subsequent remodelling of the artery to a new homeostatic state, we simulate treatment with commonly available stent-retrievers. We perform a parameter study to examine how arterial diameter and thickness affect the success of stent treatment. The model predictions of the pressure required to mechanically resolve the constriction are consistent with the capabilities of stent-retrievers. In agreement with clinical observations, our model predicts that stent-retrievers tend to be effective in arteries of up to 3 mm diameter, but fail in larger ones. Variations in arterial wall thickness significantly affect stent pressure requirements. In summary, we have developed a novel rb-CMM that accounts for VSMC active response, remodelling and damage. Consistently with clinical observations, simulations predict that stent-retrievers can mechanically resolve vasospasm. Moreover, accounting for a patient's arterial properties is important for predicting the likelihood of stent success. This in silico tool has the potential to support clinical decision-making and guide the development and evaluation of dedicated stents for personalised treatment of vasospasm.
{"title":"Modelling Growth, Remodelling and Damage of a Thick-walled Fibre-reinforced Artery with Active Response: Application to Cerebral Vasospasm and Treatment","authors":"Giulia Pederzani, Andrii Grytsan, Alfons G. Hoekstra, Anne M. Robertson, Paul N. Watton","doi":"arxiv-2408.17206","DOIUrl":"https://doi.org/arxiv-2408.17206","url":null,"abstract":"Cerebral vasospasm, a prolonged constriction of cerebral arteries, is the\u0000first cause of morbidity and mortality for patients who survive hospitalisation\u0000after aneurysmal subarachnoid haemorrhage. The recent finding that\u0000stent-retrievers can successfully treat the disease has challenged the\u0000viewpoint that damage to the extracellular matrix is necessary. We apply a 3D\u0000finite element rate-based constrained mixture model (rb-CMM) to simulate\u0000vasospasm, remodelling and treatment with stents. The artery is modelled as a\u0000thick-walled fibre-reinforced constrained mixture subject to physiological\u0000pressure and axial stretch. The model accounts for distributions of collagen\u0000fibre homeostatic stretches, VSMC active response, remodelling and damage.\u0000After simulating vasospasm and subsequent remodelling of the artery to a new\u0000homeostatic state, we simulate treatment with commonly available\u0000stent-retrievers. We perform a parameter study to examine how arterial diameter\u0000and thickness affect the success of stent treatment. The model predictions on\u0000the pressure required to mechanically resolve the constriction are consistent\u0000with stent-retrievers. In agreement with clinical observations, our model\u0000predicts that stent-retrievers tend to be effective in arteries of up to 3mm\u0000diameter, but fail in larger ones. Variations in arterial wall thickness\u0000significantly affect stent pressure requirements. We have developed a novel\u0000rb-CMM that accounts for VSMC active response, remodelling and damage.\u0000Consistently with clinical observations, simulations predict that\u0000stent-retrievers can mechanically resolve vasospasm. Moreover, accounting for a\u0000patient's arterial properties is important for predicting likelihood of stent\u0000success. This in silico tool has the potential to support clinical\u0000decision-making and guide the development and evaluation of dedicated stents\u0000for personalised treatment of vasospasm.","PeriodicalId":501309,"journal":{"name":"arXiv - CS - Computational Engineering, Finance, and Science","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2024-08-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142211235","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Hydrogen reaction rate modeling based on convolutional neural network for large eddy simulation
Quentin Malé, Corentin J Lapeyre, Nicolas Noiray
This paper establishes a data-driven modeling framework for lean Hydrogen (H2)-air reaction rates for the Large Eddy Simulation (LES) of turbulent reactive flows. This is particularly challenging since H2 molecules diffuse much faster than heat, leading to large variations in burning rates, thermodiffusive instabilities at the subfilter scale, and complex turbulence-chemistry interactions. Our data-driven approach leverages a Convolutional Neural Network (CNN), trained to approximate filtered burning rates from emulated LES data. First, five different lean premixed turbulent H2-air flame Direct Numerical Simulations (DNSs) are computed, each with a unique global equivalence ratio. Second, DNS snapshots are filtered and downsampled to emulate LES data. Third, a CNN is trained to approximate the filtered burning rates as a function of LES scalar quantities: progress variable, local equivalence ratio, and flame thickening due to filtering. Finally, the performance of the CNN model is assessed on test solutions never seen during training. The model retrieves burning rates with very high accuracy. It is also tested on two filter and downsampling parameters and two global equivalence ratios between those used during training. For these interpolation cases, the model approximates burning rates with low error even though the cases were not included in the training dataset. This a priori study shows that the proposed data-driven machine learning framework is able to address the challenge of modeling lean premixed H2-air burning rates. It paves the way for a new modeling paradigm for the simulation of carbon-free hydrogen combustion systems.
{"title":"Hydrogen reaction rate modeling based on convolutional neural network for large eddy simulation","authors":"Quentin Malé, Corentin J Lapeyre, Nicolas Noiray","doi":"arxiv-2408.16709","DOIUrl":"https://doi.org/arxiv-2408.16709","url":null,"abstract":"This paper establishes a data-driven modeling framework for lean Hydrogen\u0000(H2)-air reaction rates for the Large Eddy Simulation (LES) of turbulent\u0000reactive flows. This is particularly challenging since H2 molecules diffuse\u0000much faster than heat, leading to large variations in burning rates,\u0000thermodiffusive instabilities at the subfilter scale, and complex\u0000turbulence-chemistry interactions. Our data-driven approach leverages a\u0000Convolutional Neural Network (CNN), trained to approximate filtered burning\u0000rates from emulated LES data. First, five different lean premixed turbulent\u0000H2-air flame Direct Numerical Simulations (DNSs) are computed each with a\u0000unique global equivalence ratio. Second, DNS snapshots are filtered and\u0000downsampled to emulate LES data. Third, a CNN is trained to approximate the\u0000filtered burning rates as a function of LES scalar quantities: progress\u0000variable, local equivalence ratio and flame thickening due to filtering.\u0000Finally, the performances of the CNN model are assessed on test solutions never\u0000seen during training. The model retrieves burning rates with very high\u0000accuracy. It is also tested on two filter and downsampling parameters and two\u0000global equivalence ratios between those used during training. For these\u0000interpolation cases, the model approximates burning rates with low error even\u0000though the cases were not included in the training dataset. This a priori study\u0000shows that the proposed data-driven machine learning framework is able to\u0000address the challenge of modeling lean premixed H2-air burning rates. It paves\u0000the way for a new modeling paradigm for the simulation of carbon-free hydrogen\u0000combustion systems.","PeriodicalId":501309,"journal":{"name":"arXiv - CS - Computational Engineering, Finance, and Science","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2024-08-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142211236","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
DIC2CAE: Calculating the stress intensity factors (KI-III) from 2D and stereo displacement fields
Abdalrhaman Koko
Integrating experimental data into simulations is crucial for predicting material behaviour, especially in fracture mechanics. Digital Image Correlation (DIC) provides precise displacement measurements, essential for evaluating strain energy release rates and stress intensity factors (SIFs) around cracks. However, translating DIC data into CAE software such as ABAQUS has been challenging. DIC2CAE, a MATLAB-based tool, automates this conversion, enabling accurate simulations. It uses the J-integral method to calculate SIFs and handles complex scenarios without needing the specimen geometry or applied loads. DIC2CAE enhances the reliability of fracture mechanics simulations, accelerating materials research and development.
{"title":"DIC2CAE: Calculating the stress intensity factors (KI-III) from 2D and stereo displacement fields","authors":"Abdalrhaman Koko","doi":"arxiv-2409.08285","DOIUrl":"https://doi.org/arxiv-2409.08285","url":null,"abstract":"Integrating experimental data into simulations is crucial for predicting\u0000material behaviour, especially in fracture mechanics. Digital Image Correlation\u0000(DIC) provides precise displacement measurements, essential for evaluating\u0000strain energy release rates and stress intensity factors (SIF) around cracks.\u0000Translating DIC data into CAE software like ABAQUS has been challenging.\u0000DIC2CAE, a MATLAB-based tool, automates this conversion, enabling accurate\u0000simulations. It uses the J-integral method to calculate SIFs and handles\u0000complex scenarios without needing specimen geometry or applied loads. DIC2CAE\u0000enhances fracture mechanics simulations' reliability, accelerating materials\u0000research and development.","PeriodicalId":501309,"journal":{"name":"arXiv - CS - Computational Engineering, Finance, and Science","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2024-08-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142249047","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
When Fire Attacks: How Does Concrete Stand Up to Heat?
Anshu Sharma, Basuraj Bhowmik
Fire is a process that generates both light and heat, posing a significant threat to life and infrastructure. Buildings and structures are neither inherently susceptible to fire nor completely fire-resistant; their vulnerability largely depends on the specific causes of the fire, which can stem from natural events or human-induced hazards. High temperatures in structures can lead to severe health risks for those directly affected, discomfort due to smoke, and compromised safety if the structure fails to meet safety standards. Elevated temperatures can also cause significant structural damage, becoming the primary cause of casualties, economic losses, and material damage. This study aims to investigate the thermal and structural behavior of concrete beams when exposed to extreme fire conditions. It examines the effects of different temperatures on plain and reinforced concrete (PCC and RCC, respectively) using finite element method (FEM) simulations. Additionally, the study explores the performance of various concrete grades under severe conditions. The analysis reveals that higher-grade concrete exhibits greater displacement, crack width, stress, and strain but has lower thermal conductivity compared to lower-grade concrete. These elevated temperatures can induce severe stresses in the concrete, leading to expansion, spalling, and the potential failure of the structure. Reinforced concrete, on the other hand, shows lower stress concentrations and minimal strain up to 250 °C. These findings contribute to the existing knowledge and support the development of improved fire safety regulations and performance-based design methodologies.
{"title":"When Fire Attacks: How does Concrete Stand up to Heat ?","authors":"Anshu Sharma, Basuraj Bhowmik","doi":"arxiv-2408.15756","DOIUrl":"https://doi.org/arxiv-2408.15756","url":null,"abstract":"Fire is a process that generates both light and heat, posing a significant\u0000threat to life and infrastructure. Buildings and structures are neither\u0000inherently susceptible to fire nor completely fire-resistant; their\u0000vulnerability largely depends on the specific causes of the fire, which can\u0000stem from natural events or human-induced hazards. High temperatures in\u0000structures can lead to severe health risks for those directly affected,\u0000discomfort due to smoke, and compromised safety if the structure fails to meet\u0000safety standards. Elevated temperatures can also cause significant structural\u0000damage, becoming the primary cause of casualties, economic losses, and material\u0000damage. This study aims to investigate the thermal and structural behavior of\u0000concrete beams when exposed to extreme fire conditions. It examines the effects\u0000of different temperatures on plain and reinforced concrete (PCC and RCC,\u0000respectively) using finite element method (FEM) simulations. Additionally, the\u0000study explores the performance of various concrete grades under severe\u0000conditions. The analysis reveals that higher-grade concrete exhibits greater\u0000displacement, crack width, stress, and strain but has lower thermal\u0000conductivity compared to lower-grade concrete. These elevated temperatures can\u0000induce severe stresses in the concrete, leading to expansion, spalling, and the\u0000potential failure of the structure. Reinforced concrete, on the other hand,\u0000shows lower stress concentrations and minimal strain up to 250{deg}C. These\u0000findings contribute to the existing knowledge and support the development of\u0000improved fire safety regulations and performance-based design methodologies.","PeriodicalId":501309,"journal":{"name":"arXiv - CS - Computational Engineering, Finance, and Science","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2024-08-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142211240","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
PointEMRay: A Novel Efficient SBR Framework on Point Based Geometry
Kaiqiao Yang, Che Liu, Wenming Yu, Tie Jun Cui
The rapid computation of electromagnetic (EM) fields across various scenarios has long been a challenge, primarily due to the need for precise geometric models. The emergence of point cloud data offers a potential solution to this issue. However, the lack of electromagnetic simulation algorithms optimized for point-based models remains a significant limitation. In this study, we propose PointEMRay, an innovative shooting and bouncing ray (SBR) framework designed explicitly for point-based geometries. To enable SBR on point clouds, we address two critical challenges: point-ray intersection (PRI) and multiple bounce computation (MBC). For PRI, we propose a screen-based method leveraging deep learning. Initially, we obtain coarse depth maps through ray tube tracing, which are then transformed by a neural network into dense depth maps, normal maps, and intersection masks, collectively referred to as geometric frame buffers (GFBs). For MBC, inspired by simultaneous localization and mapping (SLAM) techniques, we introduce a GFB-assisted approach. This involves aggregating GFBs from various observation angles and integrating them to recover the complete geometry. Subsequently, a ray tracing algorithm is applied to these GFBs to compute the scattering electromagnetic field. Numerical experiments demonstrate the superior performance of PointEMRay in terms of both accuracy and efficiency, including support for real-time simulation. To the best of our knowledge, this study represents the first attempt to develop an SBR framework specifically tailored for point-based models.
{"title":"PointEMRay: A Novel Efficient SBR Framework on Point Based Geometry","authors":"Kaiqiao Yang, Che Liu, Wenming Yu, Tie Jun Cui","doi":"arxiv-2408.15583","DOIUrl":"https://doi.org/arxiv-2408.15583","url":null,"abstract":"The rapid computation of electromagnetic (EM) fields across various scenarios\u0000has long been a challenge, primarily due to the need for precise geometric\u0000models. The emergence of point cloud data offers a potential solution to this\u0000issue. However, the lack of electromagnetic simulation algorithms optimized for\u0000point-based models remains a significant limitation. In this study, we propose\u0000PointEMRay, an innovative shooting and bouncing ray (SBR) framework designed\u0000explicitly for point-based geometries. To enable SBR on point clouds, we\u0000address two critical challenges: point-ray intersection (PRI) and multiple\u0000bounce computation (MBC). For PRI, we propose a screen-based method leveraging\u0000deep learning. Initially, we obtain coarse depth maps through ray tube tracing,\u0000which are then transformed by a neural network into dense depth maps, normal\u0000maps, and intersection masks, collectively referred to as geometric frame\u0000buffers (GFBs). For MBC, inspired by simultaneous localization and mapping\u0000(SLAM) techniques, we introduce a GFB-assisted approach. This involves\u0000aggregating GFBs from various observation angles and integrating them to\u0000recover the complete geometry. Subsequently, a ray tracing algorithm is applied\u0000to these GFBs to compute the scattering electromagnetic field. Numerical\u0000experiments demonstrate the superior performance of PointEMRay in terms of both\u0000accuracy and efficiency, including support for real-time simulation. To the\u0000best of our knowledge, this study represents the first attempt to develop an\u0000SBR framework specifically tailored for point-based models.","PeriodicalId":501309,"journal":{"name":"arXiv - CS - Computational Engineering, Finance, and Science","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2024-08-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142211237","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}