We present a versatile and efficient quantum algorithm based on the Lattice Boltzmann method (LBM) for the approximate solution of the linear advection-diffusion equation (ADE). We emphasize that the LBM approximation modifies the diffusion term of the underlying exact ADE, leading to a modified equation (mADE). Owing to its versatility in terms of operator splitting, the proposed quantum LBM algorithm for the mADE provides a building block for future quantum algorithms that solve the linearized Navier-Stokes equations on quantum computers. We split the algorithm into four operations: initialization, collision, streaming, and the calculation of macroscopic quantities. We propose general quantum building blocks for each operator, which adapt intrinsically from the general three-dimensional case to lower dimensions and apply to arbitrary lattice-velocity sets. Based on (sub-linear) amplitude data encoding, we propose improved initialization and collision operations with reduced complexity and efficient sampling-based simulation. The quantum streaming operations build on previous developments. The proposed quantum algorithm allows for the computation of successive time steps but requires a full state measurement and reinitialization after every time step. It is validated by comparison with a digital implementation and against analytical solutions in one and two dimensions. Furthermore, we demonstrate the versatility of the quantum algorithm for two cases with non-uniform advection velocities in two and three dimensions. Various velocity sets are considered to further highlight the flexibility of the algorithm. We benchmark our optimized quantum algorithm against previous methods employed in sampling-based quantum simulators and demonstrate improved sampling efficiency, with accelerated convergence requiring fewer shots.
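For reference, the four-operation split described above (initialization, collision, streaming, macroscopic moment) can be sketched classically for the one-dimensional ADE. The D1Q2 lattice, relaxation time, advection velocity, and Gaussian initial condition below are illustrative choices, not taken from the paper, whose contribution is to encode these operations in quantum circuits.

```python
# Classical single-relaxation-time (BGK) LBM sketch for the 1D advection-
# diffusion equation on a D1Q2 lattice. Illustrative only; all parameter
# values are invented. Diffusion coefficient: D = cs2 * (tau - 0.5).
import math

N = 64            # lattice sites (periodic)
u = 0.1           # constant advection velocity (lattice units)
tau = 0.8         # BGK relaxation time
cs2 = 1.0         # D1Q2 lattice "sound speed" squared
w = [0.5, 0.5]    # lattice weights
c = [1, -1]       # lattice velocities

def equilibrium(rho):
    # f_i^eq = w_i * rho * (1 + c_i * u / cs2)
    return [[w[i] * r * (1.0 + c[i] * u / cs2) for r in rho] for i in range(2)]

# Initialization: a Gaussian pulse of the scalar density on a unit background
rho = [1.0 + 0.5 * math.exp(-((x - N // 2) ** 2) / 20.0) for x in range(N)]
f = equilibrium(rho)
mass0 = sum(rho)

for step in range(100):
    feq = equilibrium(rho)
    # Collision: relax each population toward its local equilibrium
    f = [[f[i][x] - (f[i][x] - feq[i][x]) / tau for x in range(N)]
         for i in range(2)]
    # Streaming: periodic shift along each lattice velocity (pull scheme)
    f = [[f[i][(x - c[i]) % N] for x in range(N)] for i in range(2)]
    # Macroscopic quantity: zeroth moment of the populations
    rho = [f[0][x] + f[1][x] for x in range(N)]
```

The pulse advects at speed `u` while spreading diffusively; total mass is conserved to round-off, which is a useful sanity check for any (classical or quantum) implementation of the split.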
We present MaRTIn, an extendable all-in-one package for calculating amplitudes up to two loops in an expansion in external momenta or using the method of infrared rearrangement. Renormalisable and non-renormalisable models can be supplied by the user; an implementation of the Standard Model is included in the package. In this manual, we discuss the scope and functionality of the software and give instructions for its use.
Data assimilation techniques are often confronted with challenges in handling complex high-dimensional physical systems, because high-precision simulation of such systems is computationally expensive and the exact observation functions applicable to them are difficult to obtain. This has prompted growing interest in integrating deep learning models into data assimilation workflows, but current data assimilation software packages cannot incorporate deep learning models. This study presents a novel Python package that seamlessly combines data assimilation with deep neural networks serving as models for state transition and observation functions. The package, named TorchDA, implements the Kalman Filter, Ensemble Kalman Filter (EnKF), 3D Variational (3DVar), and 4D Variational (4DVar) algorithms, allowing flexible algorithm selection based on application requirements. Comprehensive experiments conducted on the Lorenz 63 system and a two-dimensional shallow water system demonstrate significantly enhanced performance over standalone model predictions without assimilation. The shallow water analysis validates the package's ability to assimilate data mapped between different physical quantity spaces, in either the full space or a reduced-order space. Overall, this innovative software package enables flexible integration of deep learning representations within data assimilation, providing a versatile tool for tackling complex high-dimensional dynamical systems across scientific domains.
Program Title: TorchDA
CPC Library link to program files: https://doi.org/10.17632/bm5d7xk6gw.1
Developer's repository link: https://github.com/acse-jm122/torchda
Licensing provisions: GNU General Public License version 3
Programming language: Python 3
External routines/libraries: PyTorch.
Nature of problem: Deep learning has recently emerged as a potent tool for establishing data-driven predictive and observation functions within data assimilation workflows. Existing data assimilation tools like OpenDA and ADAO are not well-suited for handling predictive and observation models represented by deep neural networks. This gap necessitates the development of a comprehensive package that harmonizes deep learning and data assimilation.
Solution method: This project introduces TorchDA, a novel computational tool based on the PyTorch framework, addressing the challenges posed by predictive and observation functions represented by deep neural networks. It enables users to train their custom neural networks and effortlessly incorporate them into data assimilation processes. This integration facilitates the incorporation of real-time observational data in both full and reduced physical spaces.
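As background for the family of algorithms TorchDA implements, the analysis step of a stochastic ensemble Kalman filter can be sketched for a scalar state. The toy prior, observation operator, and parameter values below are invented for illustration and do not reflect TorchDA's actual API, in which the observation operator `h` could be a trained neural network.

```python
# Stochastic EnKF analysis step for a scalar state: sample statistics of the
# forecast ensemble give the Kalman gain, and each member is updated with a
# perturbed observation. All values here are illustrative.
import random
import statistics

random.seed(0)
Ne = 500                 # ensemble size
R = 0.25                 # observation-error variance
y_obs = 1.0              # the observation

def h(x):
    # Observation operator; a package like TorchDA would allow a network here.
    return x

# Forecast ensemble: prior belief centered on 0 with unit spread
xf = [random.gauss(0.0, 1.0) for _ in range(Ne)]

# Sample forecast statistics
hx = [h(x) for x in xf]
x_mean = statistics.fmean(xf)
h_mean = statistics.fmean(hx)
cov_xh = sum((x - x_mean) * (hh - h_mean) for x, hh in zip(xf, hx)) / (Ne - 1)
var_h = statistics.variance(hx)

# Kalman gain and perturbed-observation analysis update
K = cov_xh / (var_h + R)
xa = [x + K * (y_obs + random.gauss(0.0, R ** 0.5) - hh)
      for x, hh in zip(xf, hx)]

xa_mean = statistics.fmean(xa)   # pulled from the prior mean toward y_obs
```

With these values the gain is roughly `1 / (1 + R) = 0.8`, so the analysis mean lies about 80% of the way from the prior mean to the observation, and the ensemble spread shrinks accordingly.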
We present a hybrid Eulerian-Lagrangian (HEL) Vlasov method for nonlinear resonant wave-particle interactions in a weakly inhomogeneous magnetic field. The governing Vlasov equation is derived from a recently proposed resonance-tracking Hamiltonian theory. It gives the evolution of the distribution function under a scale-separated Hamiltonian that contains the fast-varying coherent wave-particle interaction and the slowly-varying motion about the resonance frame of reference. The hybrid scheme solves the fast-varying phase-space evolution on an Eulerian grid with an adaptive time step and then advances the slowly-varying dynamics by a Lagrangian method along the resonance trajectory. We apply the HEL method to study the frequency chirping of whistler-mode chorus waves in the magnetosphere; the self-consistent simulations reproduce the chirping chorus waves and give high-resolution phase-space dynamics of energetic particles at low computational cost. Compared to conventional Vlasov and particle-in-cell methods, the scale-separated HEL approach could provide additional insights into wave instabilities and nonlinear wave-particle coherence.
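The core splitting idea, fast dynamics solved on a fixed grid while the slow drift is absorbed into a moving frame rather than re-gridded, can be illustrated with a much simpler toy problem. The scalar advection equation, the upwind discretization, and all parameters below are invented for illustration and are not the paper's resonance-tracking scheme.

```python
# Toy Eulerian/Lagrangian split for df/dt + (v_fast + v_slow) df/dx = 0:
# the fast advection is integrated on a fixed periodic grid (first-order
# upwind), while the slow advection is handled exactly by advancing the
# frame coordinate, avoiding interpolation. Illustrative values only.
import math

N = 200
L = 2 * math.pi
dx = L / N
dt = 0.001
v_fast = 1.0      # fast dynamics: solved on the Eulerian grid
v_slow = 0.3      # slow drift: absorbed into a moving frame (Lagrangian)

f = [1.0 + 0.5 * math.sin(2 * math.pi * i / N) for i in range(N)]
x_frame = 0.0

for step in range(1000):
    # Eulerian substep: first-order upwind for the fast advection
    # (f[i - 1] with i = 0 wraps to f[-1], giving periodic boundaries)
    f = [f[i] - v_fast * dt / dx * (f[i] - f[i - 1]) for i in range(N)]
    # Lagrangian substep: advance the frame instead of shifting the data
    x_frame += v_slow * dt

# The physical solution is f evaluated in the moving frame, f(x - x_frame)
```

The Lagrangian substep is exact and dissipation-free, which is the payoff of treating the slow motion in a co-moving frame; in the HEL method the analogous role is played by the resonance trajectory.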
Prompt Gamma Neutron Activation Analysis (PGNAA) is a widely used technique for analyzing materials. The technique defines graphs (reference spectrum collections, or libraries) of spectral intensity as a function of energy (channels) for the elements present in a sample. The Monte Carlo Library Least Squares (MCLLS) approach is dominant in the PGNAA technique. The main difficulties faced in the MCLLS domain are (1) numerical instabilities in the least-squares stage (Library Least Squares (LLS)); (2) overdetermination of the system of equations; (3) linear dependence among the libraries; (4) gamma radiation scattering; and (5) high computational costs. The present work proposes optimizing the LLS module to address these problems using the Greedy Randomized Adaptive Search Procedure (GRASP) and Continuous Greedy Randomized Adaptive Search Procedure (CGRASP) algorithms. The search for the spectral count peaks of the libraries leads to a partitioning of the data before the GRASP and CGRASP algorithms are applied. The methodological procedures also address estimating the spectral counts of an unknown library that may be present in the sample. The results show (1) efficient partitioning of the input data; (2) evidence of suitable precision in the weight fractions of the libraries that make up the sample (average precision of the order of 3.16%, against 8.8% for other methods); and (3) success in the approximation and estimation of the unknown library present in the sample (average precision of 4.25%). Our method proved promising in improving the determination of percentage count fractions by the least-squares module and demonstrates the advantages of data partitioning.
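To make the LLS stage concrete, the fit of a measured spectrum as a weighted sum of elemental library spectra can be sketched for two synthetic libraries, where the normal equations have a closed-form solution. The Gaussian "libraries", true weights, and noise level below are invented for illustration; the paper's contribution is the GRASP/CGRASP optimization wrapped around this stage.

```python
# Minimal Library Least Squares (LLS) illustration: fit a sample spectrum
# y as w_a * lib_a + w_b * lib_b by solving the 2x2 normal equations
# (G^T G) w = G^T y in closed form. All data here are synthetic.
import math
import random

random.seed(1)
channels = list(range(256))

def peak(center, width):
    # Synthetic single-peak library spectrum (Gaussian over channels)
    return [math.exp(-((ch - center) ** 2) / (2 * width ** 2))
            for ch in channels]

lib_a = peak(80, 6.0)     # stand-in library spectrum of element A
lib_b = peak(160, 9.0)    # stand-in library spectrum of element B

true_w = (2.0, 0.5)
sample = [true_w[0] * a + true_w[1] * b + random.gauss(0.0, 0.01)
          for a, b in zip(lib_a, lib_b)]

# Entries of G^T G and G^T y
saa = sum(a * a for a in lib_a)
sbb = sum(b * b for b in lib_b)
sab = sum(a * b for a, b in zip(lib_a, lib_b))
say = sum(a * y for a, y in zip(lib_a, sample))
sby = sum(b * y for b, y in zip(lib_b, sample))

# A near-zero determinant is exactly the linear-dependence/instability
# problem the paper describes for realistic, overlapping libraries.
det = saa * sbb - sab * sab
w_a = (sbb * say - sab * sby) / det
w_b = (saa * sby - sab * say) / det
```

With well-separated peaks the recovered weights match the true ones closely; strongly overlapping libraries drive `det` toward zero, motivating the data partitioning and metaheuristic search proposed in the paper.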
We develop an open-source Python-based Parameter Estimation Tool utilizing Bayesian Optimization (petBOA) with a unique wrapper interface for gradient-free parameter estimation of expensive black-box kinetic models. We provide examples for Python macrokinetic and microkinetic modeling (MKM) tools, such as Cantera and OpenMKM. petBOA leverages surrogate Gaussian processes to approximate and minimize the objective function designed for parameter estimation. Bayesian Optimization (BO) is implemented using the open-source BoTorch toolkit. petBOA employs local and global sensitivity analyses to identify important parameters optimized against experimental data, and leverages pMuTT for consistent kinetic and thermodynamic parameters while perturbing species binding energies within the typical error of conventional DFT exchange-correlation functionals (20-30 kJ/mol). The source code and documentation are hosted on GitHub.
Program title: petBOA
Developer's repository link: https://github.com/VlachosGroup/petBOA
Licensing provisions: MIT license
Programming language: Python
External routines: NEXTorch, PyTorch, GPyTorch, BoTorch, Matplotlib, PyDOE2, NumPy, SciPy, pandas, pMuTT, SALib, docker.
Nature of the problem: An open-source, gradient-free parameter estimation tool for black-box microkinetic models, such as those built with OpenMKM, is lacking.
Solution method: petBOA is a Python-based tool that utilizes Bayesian Optimization and offers a unique wrapper interface for expensive black-box kinetic models. It leverages the pMuTT library for consistent kinetic and thermodynamic parameter estimation and employs both local and global sensitivity analyses to identify crucial parameters.
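The surrogate-based loop that petBOA builds on can be sketched in a self-contained form: a Gaussian-process surrogate of an expensive black-box objective is refit after each evaluation, and the next evaluation point maximizes expected improvement. petBOA itself uses BoTorch; the toy objective, RBF kernel, hyperparameters, and grid below are invented for illustration and are not petBOA's API.

```python
# Minimal Bayesian optimization with a NumPy Gaussian process and the
# expected-improvement acquisition function. Illustrative only.
import math
import numpy as np

def objective(x):
    # Stand-in for an expensive kinetic-model misfit (1D, minimum near x ~ 2.2)
    return (x - 2.0) ** 2 + 0.5 * np.sin(5 * x)

def rbf(a, b, ell=0.5):
    # Squared-exponential kernel with unit prior variance
    return np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / ell ** 2)

X = np.array([0.0, 1.5, 4.0])          # initial design points
y = objective(X)
grid = np.linspace(0.0, 4.0, 401)      # candidate points for the acquisition

for it in range(15):
    K = rbf(X, X) + 1e-8 * np.eye(len(X))
    Ks = rbf(grid, X)
    mu = Ks @ np.linalg.solve(K, y)                   # GP posterior mean
    v = np.linalg.solve(K, Ks.T)
    var = np.clip(1.0 - np.einsum('ij,ji->i', Ks, v), 1e-12, None)
    sd = np.sqrt(var)                                 # GP posterior std. dev.
    best = y.min()
    z = (best - mu) / sd
    Phi = 0.5 * (1.0 + np.vectorize(math.erf)(z / math.sqrt(2.0)))
    phi = np.exp(-0.5 * z ** 2) / math.sqrt(2 * math.pi)
    ei = (best - mu) * Phi + sd * phi                 # expected improvement
    x_next = grid[int(np.argmax(ei))]
    X = np.append(X, x_next)
    y = np.append(y, objective(x_next))

x_best = X[int(np.argmin(y))]   # incumbent after the BO budget is spent
```

Because each objective call stands in for a full microkinetic-model run, the loop spends its budget where the surrogate predicts improvement, which is what makes the approach attractive for gradient-free estimation against experimental data.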
CoupledElectricMagneticDipoles.jl is a set of modules implemented in the Julia language. Several modules are provided to solve typical problems encountered in nano-optics and nano-photonics, including light emission by point sources in complex environments and electromagnetic wave scattering by single objects with complex geometries or by collections of such objects. Optical forces can also be computed with this software package.
Two closely related computational methods are implemented in this library: the discrete dipole approximation (DDA) and the coupled electric and magnetic dipoles (CEMD) method.
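Both methods rest on the same self-consistency condition: each dipole responds to the incident field plus the fields scattered by all other dipoles, which yields a linear system for the dipole moments. A schematic scalar version can be sketched as follows; real DDA/CEMD implementations use the full dyadic (and, for CEMD, magnetic) Green tensors, and the polarizability, geometry, and Green function here are illustrative simplifications (and the sketch is in Python, whereas the package itself is written in Julia).

```python
# Scalar coupled-dipole sketch: solve p_i = alpha * (E_inc_i + sum_j G_ij p_j),
# i.e. (I - alpha G) p = alpha E_inc. Illustrative values only.
import numpy as np

k = 2 * np.pi          # wavenumber (wavelength = 1)
alpha = 0.01 + 0.001j  # scalar polarizability (illustrative value)

# Three dipoles on a line, half a wavelength apart
pos = np.array([[0.0, 0.0, 0.0], [0.5, 0.0, 0.0], [1.0, 0.0, 0.0]])
n = len(pos)

# Incident plane wave propagating along z: E_inc = exp(i k z) = 1 at z = 0
E_inc = np.ones(n, dtype=complex)

# Scalar Green function g(r) = exp(i k r) / r between distinct dipoles
G = np.zeros((n, n), dtype=complex)
for i in range(n):
    for j in range(n):
        if i != j:
            r = np.linalg.norm(pos[i] - pos[j])
            G[i, j] = np.exp(1j * k * r) / r

# Self-consistent dipole moments
p = np.linalg.solve(np.eye(n) - alpha * G, alpha * E_inc)
```

Once the moments `p` are known, derived quantities such as scattered fields and optical forces follow by summing the corresponding Green-function contributions, which is the pattern the package's modules implement for the full vector problem.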