The Plateau-Rayleigh instability causes the fragmentation of a liquid ligament into smaller droplets. In this study, a numerical investigation of this phenomenon based on a single relaxation time (SRT) pseudo-potential lattice Boltzmann method (LBM) is proposed. Systematically analysed, this test case allows appropriate parameter sets to be designed for engineering applications involving the hydrodynamics of a jet. Grid convergence simulations are performed in the limit where the interface thickness is asymptotically smaller than the characteristic size of the ligament. These simulations show a neat asymptotic behaviour, possibly related to the convergence of LBM diffuse-interface physics to sharp-interface hydrodynamics.
"Ligament break-up simulation through pseudo-potential Lattice Boltzmann Method", by D. Chiappini, X. Xue, G. Falcucci, M. Sbragaglia. arXiv: Computational Physics, 2018-07-10. DOI: 10.1063/1.5044006.
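As a hedged illustration of why a pseudo-potential (Shan-Chen) LBM can sustain a liquid ligament at all, the sketch below evaluates the model's bulk equation of state and checks that, for sufficiently strong coupling, the isotherm becomes non-monotonic and can therefore support liquid-vapour coexistence. The coupling G = -5 and reference density rho0 = 1 are illustrative values, not parameters from the paper.

```python
import numpy as np

def shan_chen_pressure(rho, G, rho0=1.0, cs2=1.0 / 3.0):
    """Bulk pressure of the single-component Shan-Chen pseudo-potential model:
    p = cs^2 rho + (G cs^2 / 2) psi(rho)^2 with psi = rho0 (1 - exp(-rho/rho0))."""
    psi = rho0 * (1.0 - np.exp(-rho / rho0))
    return cs2 * rho + 0.5 * G * cs2 * psi ** 2

rho = np.linspace(0.05, 3.0, 400)
p = shan_chen_pressure(rho, G=-5.0)  # G below the critical value of -4 for this psi
# A non-monotonic p(rho) is what permits a dense (liquid) and a dilute (vapour)
# phase to coexist, i.e. a ligament surrounded by its vapour.
non_monotonic = bool(np.any(np.diff(p) < 0.0))
```

For G above the critical value the isotherm is monotonic and no interface can form, which is why parameter-set design (the subject of the abstract) starts from this equation of state.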
Tensor-ring decomposition plays a key role in various applications of tensor network representations in physics as well as in other fields. In most heuristic algorithms for the tensor-ring decomposition, one encounters the problem of local-minimum trapping. In particular, minima related to the topological structure of the correlations are hard to escape. Therefore, identification of the correlation structure, somewhat analogous to finding the matching ends of entangled strings, is a task of central importance. We show how this problem naturally arises in physical applications, and present a strategy for winning this string-pull game.
"Tensor-Ring Decomposition with Index-Splitting", by Hyun-Yong Lee, N. Kawashima. arXiv: Computational Physics, 2018-07-06. DOI: 10.7566/JPSJ.89.054003.
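The decomposition itself can be illustrated with a minimal tensor-train factorisation by sequential SVDs (a tensor ring with trivial boundary bond dimension). This sketch is generic: it does not implement the paper's index-splitting strategy, only the kind of heuristic decomposition it improves upon.

```python
import numpy as np

def tt_decompose(T, eps=1e-12):
    """Factorise T into cores of shape (r_k, d_k, r_{k+1}) by sequential SVDs."""
    dims = T.shape
    cores, r = [], 1
    M = T.reshape(1, -1)
    for d in dims[:-1]:
        M = M.reshape(r * d, -1)
        U, s, Vt = np.linalg.svd(M, full_matrices=False)
        keep = max(1, int(np.sum(s > eps * s[0])))  # truncate tiny singular values
        cores.append(U[:, :keep].reshape(r, d, keep))
        M = s[:keep, None] * Vt[:keep]
        r = keep
    cores.append(M.reshape(r, dims[-1], 1))
    return cores

def tt_contract(cores):
    """Contract the chain of cores back into a dense tensor."""
    out = cores[0]
    for core in cores[1:]:
        out = np.tensordot(out, core, axes=([out.ndim - 1], [0]))
    return out.reshape([core.shape[1] for core in cores])

rng = np.random.default_rng(1)
T = rng.standard_normal((2, 3, 4))
cores = tt_decompose(T)
T_rec = tt_contract(cores)  # exact reconstruction when no truncation occurs
```

Closing the ring (a nontrivial bond between the first and last core) is where the local-minima and correlation-topology issues discussed in the abstract arise.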
Pub Date: 2018-06-01. DOI: 10.4208/cicp.oa-2017-0171
Lorenzo Siddi, E. Cazzola, G. Lapenta
This work presents a set of preconditioning strategies able to significantly accelerate fully implicit energy-conserving Particle-in-Cell methods, to a level that becomes competitive with semi-implicit methods. We consider three preconditioners and compare them with a straight unpreconditioned Jacobian-Free Newton Krylov (JFNK) implementation. The first two focus, respectively, on improving the handling of particles (particle hiding) or fields (field hiding) within the JFNK iteration. The third uses the field-hiding preconditioner within a direct Newton iteration where a Schwarz-decomposed Jacobian is computed analytically. Field hiding, used either with JFNK or with the direct Newton-Schwarz (DNS) method, outperforms all other approaches. We compare these implementations with a recent semi-implicit energy-conserving scheme. Fully implicit methods still lag behind in cost per cycle, but not by a large margin when proper preconditioning is used. However, for exact energy conservation, preconditioned fully implicit methods are significantly easier to implement than semi-implicit methods and can be extended to fully relativistic physics.
"Comparison of Preconditioning Strategies in Energy Conserving Implicit Particle in Cell Methods", by Lorenzo Siddi, E. Cazzola, G. Lapenta. arXiv: Computational Physics, 2018-06-01. DOI: 10.4208/cicp.oa-2017-0171.
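The unpreconditioned JFNK baseline mentioned in the abstract can be sketched in a few lines: the Jacobian is never assembled, and J(x)v is replaced by a finite difference of the residual around the current iterate. The toy residual F and all tolerances below are placeholders, not the implicit PIC field-particle equations; SciPy's GMRES serves as the inner Krylov solver.

```python
import numpy as np
from scipy.sparse.linalg import LinearOperator, gmres

def F(x):
    # Toy nonlinear residual standing in for the implicit PIC equations.
    return np.array([x[0] ** 2 + x[1] - 3.0, x[0] + x[1] ** 2 - 5.0])

def jfnk_solve(x, tol=1e-10, max_newton=25):
    """Jacobian-free Newton-Krylov: J(x)v is approximated by
    (F(x + eps*v) - F(x)) / eps, so the Jacobian is never formed."""
    for _ in range(max_newton):
        r = F(x)
        if np.linalg.norm(r) < tol:
            break
        eps = 1e-7 * (1.0 + np.linalg.norm(x))
        J = LinearOperator((x.size, x.size),
                           matvec=lambda v: (F(x + eps * v) - r) / eps)
        dx, _ = gmres(J, -r)  # inner matrix-free Krylov solve
        x = x + dx
    return x

root = jfnk_solve(np.array([1.0, 1.0]))  # converges to the root (1, 2)
```

The particle- and field-hiding preconditioners discussed in the paper act on this inner GMRES solve; the point of the comparison is how much they shrink its iteration count.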
We present a coupled continuum formulation for the electrostatic, chemical, thermal, mechanical and fluid physics in battery materials. Our treatment is at the particle scale, at which the active particles held together by binders, the porous separator, current collectors and the perfusing electrolyte are explicitly modeled. Starting with the description common to the field, in terms of reaction-transport partial differential equations for ions, variants of the classical Poisson equation for electrostatics, and the heat equation, we introduce solid-fluid interaction to the problem. Our main contribution is to model the electrolyte as an incompressible fluid driven by elastic, thermal and lithium intercalation strains in the active material. Our treatment is in the finite strain setting, and uses the Arbitrary Lagrangian-Eulerian (ALE) framework to account for mechanical coupling of the solid and fluid. We present a detailed computational study of the influence of solid-fluid interaction, intercalation strain magnitude, particle size and initial porosity upon porosity evolution, ion distribution and electrostatic potential fields in the cell. We also present a comparison between the particle-scale model and a recent homogenized, electrode-scale model.
"A multi-physics battery model with particle scale resolution of porosity evolution driven by intercalation strain and electrolyte flow", by Zhenlin Wang, K. Garikipati. arXiv: Computational Physics, 2018-04-17. DOI: 10.1149/2.0141811jes.
L. Bauerdick, M. Ritter, O. Gutsche, M. Sokoloff, N. Castro, M. Girone, T. Sakuma, P. Elmer, B. Bockelman, E. Sexton-Kennedy, G. Watts, J. Letts, F. Würthwein, C. Vuosalo, J. Pivarski, D. Katz, R. Bianchi, K. Cranmer, R. Gardner, S. McKee, B. Hegner, E. Rodrigues, D. Lange, C. Paus, J. Hernández, K. Pedro, B. Jayatilaka, L. Kreczko
At the heart of experimental high energy physics (HEP) is the development of facilities and instrumentation that provide sensitivity to new phenomena. Our understanding of nature at its most fundamental level is advanced through the analysis and interpretation of data from sophisticated detectors in HEP experiments. The goal of data analysis systems is to realize the maximum possible scientific potential of the data within the constraints of computing and human resources in the least time. To achieve this goal, future analysis systems should empower physicists to access the data with a high level of interactivity, reproducibility and throughput capability. As part of the HEP Software Foundation Community White Paper process, a working group on Data Analysis and Interpretation was formed to assess the challenges and opportunities in HEP data analysis and develop a roadmap for activities in this area over the next decade. In this report, the key findings and recommendations of the Data Analysis and Interpretation Working Group are presented.
"HEP Software Foundation Community White Paper Working Group - Data Analysis and Interpretation", by L. Bauerdick et al. arXiv: Computational Physics, 2018-04-09. DOI: 10.2172/1436702.
Pub Date: 2018-03-31. DOI: 10.1504/IJNEST.2018.092604
E. D. L. Cruz-Sánchez, J. Klapp, E. Mayoral-Villa, R. González-Galán, A. M. Gómez-Torres, C. E. Alvarado-Rodríguez
The use of computer simulation techniques is an advantageous tool for evaluating and selecting the most appropriate site for radionuclide confinement. Modelling different scenarios allows decisions to be made about the safest location for the final repository. In this work, a two-dimensional numerical model for the dispersion of contaminants through a saturated porous medium, based on the finite element method (FEM), was applied to study the transport of radioisotopes in a temporary nuclear repository located in the vadose zone at Peña Blanca, Mexico. The 2D model uses Darcy's law to calculate the velocity field, which serves as input for a second computation that solves the mass transport equation. Taking radionuclide decay into account, the transport of long-lived U-series daughters such as $^{238}$U, $^{234}$U, and $^{230}$Th is evaluated. The model was validated against experimental data reported in the literature, with good agreement between the numerical results and the available measurements. The simulations show preferential routes that the contaminant plume follows over time. The radionuclide flow is highly irregular and is influenced by faults in the area and by interactions within the fluid-solid matrix. The resulting radionuclide concentration distribution is as expected. The most important result of this work is the development of a validated model to describe the migration of radionuclides in saturated porous media with some fractures.
"Numerical simulation of a temporary repository of radioactive material", by E. D. L. Cruz-Sánchez, J. Klapp, E. Mayoral-Villa, R. González-Galán, A. M. Gómez-Torres, C. E. Alvarado-Rodríguez. arXiv: Computational Physics, 2018-03-31. DOI: 10.1504/IJNEST.2018.092604.
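A reduced illustration of the two-stage computation (a velocity field feeding a transport solve) is the one-dimensional advection-dispersion-decay update below. A uniform Darcy velocity v stands in for the FEM velocity solve, and all coefficients are illustrative, not the Peña Blanca site parameters.

```python
import numpy as np

def transport_step(c, v, D, lam, dx, dt):
    """One explicit step of dc/dt = -v dc/dx + D d2c/dx2 - lam*c
    (upwind advection for v > 0, central dispersion, first-order decay)."""
    adv = -v * (c - np.roll(c, 1)) / dx
    disp = D * (np.roll(c, -1) - 2.0 * c + np.roll(c, 1)) / dx ** 2
    return c + dt * (adv + disp - lam * c)

x = np.linspace(0.0, 1.0, 200)
dx = x[1] - x[0]
c = np.exp(-((x - 0.2) / 0.05) ** 2)          # initial contaminant pulse
v, D = 1.0, 1.0e-3                            # uniform Darcy velocity, dispersion
lam = np.log(2.0) / 5.0                       # decay constant for half-life 5
dt = 0.4 * min(dx / v, dx ** 2 / (2.0 * D))   # respect CFL and diffusion limits
mass0 = c.sum()
for _ in range(200):
    c = transport_step(c, v, D, lam, dx, dt)
# The pulse drifts downstream, spreads, and loses mass to radioactive decay.
```

The paper's 2D FEM model adds a heterogeneous, fracture-influenced velocity field to this same transport structure, which is what produces the preferential routes described in the abstract.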
Laser ablation of gold irradiated through transparent water is studied. We follow the dynamics of gold expansion into the water over a very long time interval (up to 200 ns). This is significant because it is at these late times that the pressure at the contact boundary between gold (Au) and water decreases down to the saturation pressure of gold. Thus the saturation pressure begins to influence the dynamics near the contact. The inertia of the displaced water decelerates the contact. In the reference frame attached to the contact, the deceleration is equivalent to the free-fall acceleration in a gravity field. Such conditions are favorable for the development of the Rayleigh-Taylor instability (RTI), because the heavy fluid (gold) is placed above the light one (water) in a gravity field. We extract the RTI growth increment from one-dimensional two-temperature hydrodynamics (2T-HD) runs. Surface tension and especially viscosity significantly damp the RTI gain during deceleration. Atomistic simulation with the Molecular Dynamics method combined with a Monte-Carlo method (MD-MC) for the large electron heat conduction in gold is performed to gain a clear insight into the underlying mechanisms. MD-MC runs show that significant amplification of surface perturbations takes place. These perturbations start from thermal fluctuations and the noise produced by bombardment of the atmosphere by fragments of foam. The perturbations are amplified enough to separate droplets from the RTI jets of gold. Thus the gold droplets fall into the water.
"Laser Ablation of Gold into Water: near Critical Point Phenomena and Hydrodynamic Instability", by N. Inogamov, V. Zhakhovsky, V. Khokhlov. arXiv: Computational Physics, 2018-03-20. DOI: 10.1063/1.5045043.
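For the RTI deceleration phase, a hedged back-of-the-envelope check is the classical inviscid dispersion relation with surface tension, which already exhibits the short-wavelength cutoff that (together with viscosity, not included here) damps the instability gain. The densities, deceleration g, and surface tension below are order-of-magnitude stand-ins, not values from the 2T-HD runs.

```python
import numpy as np

def rt_growth_rate(k, rho_h, rho_l, g, sigma):
    """Inviscid Rayleigh-Taylor growth rate with surface tension:
    omega^2 = A*g*k - sigma*k^3/(rho_h + rho_l), clamped to 0 where stable."""
    A = (rho_h - rho_l) / (rho_h + rho_l)  # Atwood number
    om2 = A * g * k - sigma * k ** 3 / (rho_h + rho_l)
    return np.sqrt(np.maximum(om2, 0.0))

rho_h, rho_l = 17300.0, 1000.0   # kg/m^3: molten gold decelerated by water
g = 1.0e12                       # m/s^2: illustrative contact deceleration
sigma = 1.1                      # N/m: illustrative surface tension
k_cut = np.sqrt((rho_h - rho_l) * g / sigma)  # surface tension stabilises k > k_cut
om_mid = rt_growth_rate(0.5 * k_cut, rho_h, rho_l, g, sigma)
om_cut = rt_growth_rate(k_cut, rho_h, rho_l, g, sigma)
```

Wavenumbers beyond k_cut are stabilised by surface tension; viscosity further reduces the growth rate at all k, which is the damping effect the abstract attributes to the 2T-HD runs.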
J. Apostolakis, B. Nachman, S. Roiser, A. Lyon, K. Pedro, K. Herner, S. Sekmen, D. Konstantinov, X. Qian, L. Welty-Rieger, S. Easo, S. Vallecorsa, E. Snider, J. D. Chapman, C. Zhang, H. Wenzel, L. Fields, B. Siddi, M. Gheata, J. Raaf, Michela Paganini, Ivantchenko, R. Mount, G. Cosmo, M. Asai, S. Farrell, R. Cenci, J. Yarba, P. Canal, F. Hariri, A. Norman, S. Wenzel, A. Gheata, R. Hatcher, M. Verderi, I. Osborne, B. Viren, P. Mato, S. Banerjee, W. Pokorski, D. Wright, P. Lebrun, T. Yang, G. Corti, A. Dotti, M. Kirby, J. Mousseau, Riccardo Bianchi, Z. Marshall, M. Hildreth, A. Ribon, M. Novak, M. Mooney, L. Oliveira, M. Rama, K. Genser, R. Kutschke, S. Jun, G. Lima, D. Ruterbories, T. Junk
A working group on detector simulation was formed as part of the high-energy physics (HEP) Software Foundation's initiative to prepare a Community White Paper that describes the main software challenges and opportunities to be faced in the HEP field over the next decade. The working group met over a period of several months in order to review the current status of the Full and Fast simulation applications of HEP experiments and the improvements that will need to be made in order to meet the goals of future HEP experimental programmes. The scope of the topics covered includes the main components of a HEP simulation application, such as MC truth handling, geometry modeling, particle propagation in materials and fields, physics modeling of the interactions of particles with matter, the treatment of pileup and other backgrounds, as well as signal processing and digitisation. The resulting work programme described in this document focuses on the need to improve both the software performance and the physics of detector simulation. The goals are to increase the accuracy of the physics models and expand their applicability to future physics programmes, while achieving large factors in computing performance gains consistent with projections on available computing resources.
"HEP Software Foundation Community White Paper Working Group - Detector Simulation", by J. Apostolakis et al. arXiv: Computational Physics, 2018-03-12. DOI: 10.2172/1437300.
Pub Date: 2018-03-07. DOI: 10.1103/PhysRevMaterials.2.053802
T. Swinburne, D. Perez
A massively parallel method to build large transition rate matrices from temperature accelerated molecular dynamics trajectories is presented. Bayesian Markov model analysis is used to estimate the expected residence time in the known state space, providing crucial uncertainty quantification for higher scale simulation schemes such as kinetic Monte Carlo or cluster dynamics. The estimators are additionally used to optimize where exploration is performed and the degree of temperature acceleration on the fly, giving an autonomous, optimal procedure to explore the state space of complex systems. The method is tested against exactly solvable models and used to explore the dynamics of C15 interstitial defects in iron. Our uncertainty quantification scheme allows for accurate modeling of the evolution of these defects over timescales of several seconds.
"Self-optimized construction of transition rate matrices from accelerated atomistic simulations with Bayesian uncertainty quantification", by T. Swinburne, D. Perez. arXiv: Computational Physics, 2018-03-07. DOI: 10.1103/PhysRevMaterials.2.053802.
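The flavour of Bayesian residence-time estimation can be conveyed with a conjugate toy model: first-escape times pooled from parallel trajectories are assumed exponential, a Gamma prior on the escape rate yields a closed-form posterior, and posterior sampling gives the credible interval used for uncertainty quantification. The exponential/Gamma setup and all numbers are illustrative, not the paper's estimator.

```python
import numpy as np

# Synthetic first-escape times harvested from many parallel trajectories.
rng = np.random.default_rng(0)
true_rate = 2.0
escape_times = rng.exponential(1.0 / true_rate, size=500)

# Gamma(a0, b0) prior on the escape rate; with exponential data the
# posterior is Gamma(a0 + n, b0 + sum of times) in closed form.
a0, b0 = 1.0, 1.0
a_post = a0 + escape_times.size
b_post = b0 + escape_times.sum()
rate_mean = a_post / b_post                 # posterior mean escape rate
residence_mean = b_post / (a_post - 1.0)    # posterior mean residence time (1/rate)
rate_samples = rng.gamma(a_post, 1.0 / b_post, size=20000)
ci = np.percentile(rate_samples, [2.5, 97.5])  # 95% credible interval on the rate
```

In the paper this kind of posterior feeds higher-scale schemes (kinetic Monte Carlo, cluster dynamics) and decides, on the fly, where further exploration is most valuable.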
Pub Date: 2018-03-02. DOI: 10.1615/INT.J.UNCERTAINTYQUANTIFICATION.2018025837
L. Bruder, P. Koutsourelakis
The present paper is motivated by one of the most fundamental challenges in inverse problems: quantifying model discrepancies and errors. While significant strides have been made in calibrating model parameters, the overwhelming majority of pertinent methods are based on the assumption of a perfect model. Motivated by problems in solid mechanics which, like all problems in continuum thermodynamics, are described by conservation laws and phenomenological constitutive closures, we argue that in order to quantify model uncertainty in a physically meaningful manner, one should break open the black-box forward model. In particular, we propose formulating an undirected probabilistic model that explicitly accounts for the governing equations and their validity. This recasts the solution of both forward and inverse problems as probabilistic inference tasks in which the problem's state variables must be compatible not only with the data but also with the governing equations. Even though the probability densities involved do not contain any black-box terms, they live in much higher-dimensional spaces. In combination with the intractability of the normalization constant of the undirected model employed, this poses significant challenges, which we propose to address with a linearly scaling, double layer of Stochastic Variational Inference. We demonstrate the capabilities and efficacy of the proposed model in synthetic forward and inverse problems (with and without model error) in elastography.
"Beyond black-boxes in Bayesian inverse problems and model validation: applications in solid mechanics of elastography", by L. Bruder, P. Koutsourelakis. arXiv: Computational Physics, 2018-03-02. DOI: 10.1615/INT.J.UNCERTAINTYQUANTIFICATION.2018025837.