The Prony method for approximating signals composed of sinusoidal/exponential components dates back to Prony's seminal dissertation of 1795. It found real-world application, however, only with the advent of digital computing, which made the extensive numerical labor the method inherently demands feasible. The adaptive LMS filter, the most pervasive method for signal filtering and approximation since its introduction in 1965, does not provide consistently high-precision results, as the extended experiment in this work shows. As a remedy, this study refines the Prony method, observing that a more precise computational approximation can be obtained by adjusting for computational error in the autoregressive model set up in the initial step of the Prony computation itself. The adjustment is proportional to the deviation of the coefficients of that same autoregressive model. The results of this refinement achieve the expected consistency and higher precision in the output (recovered-signal) approximations, as shown in this work and compared against results obtained with the adaptive LMS filter.
"A Prony Method Variant which Surpasses the Adaptive LMS Filter in the Output Signal's Representation of Input" — Parthasarathy Srinivasan. arXiv:2409.01272, arXiv - CS - Mathematical Software, published 2024-09-02.
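As context for the adjustment described above, the classical (unadjusted) Prony computation can be sketched in a few lines of NumPy. The `prony` helper and its least-squares formulation below are illustrative, not the paper's code; the paper's variant modifies the autoregressive step marked in the comments.

```python
import numpy as np

def prony(x, p):
    """Classical Prony fit of p exponential components to samples x."""
    x = np.asarray(x, dtype=complex)
    N = len(x)
    # Step 1: autoregressive (linear-prediction) setup -- the stage the
    # paper's variant adjusts for computational error.
    # Solve x[n] ~ a1*x[n-1] + ... + ap*x[n-p] by least squares.
    A = np.array([[x[n - k] for k in range(1, p + 1)] for n in range(p, N)])
    a, *_ = np.linalg.lstsq(A, x[p:], rcond=None)
    # Step 2: roots of the characteristic polynomial are the signal poles.
    poles = np.roots(np.concatenate(([1.0], -a)))
    # Step 3: component amplitudes from a Vandermonde least-squares fit.
    V = np.vander(poles, N, increasing=True).T   # rows: poles**n
    amps, *_ = np.linalg.lstsq(V, x, rcond=None)
    return poles, amps

# recover x[n] = 2*(0.9)^n + (-0.5)^n from 20 samples
n = np.arange(20)
samples = 2 * 0.9**n + (-0.5)**n
poles, amps = prony(samples, p=2)   # poles close to {0.9, -0.5}
```

The least-squares system in step 1 is exactly the place where finite-precision effects propagate into the pole estimates, which is why an error-proportional correction there can improve the recovered signal.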
Sibo Cheng, Jinyang Min, Che Liu, Rossella Arcucci
Data assimilation techniques often face challenges in complex high-dimensional physical systems, because high-precision simulation of such systems is computationally expensive and exact observation functions for them are difficult to obtain. This has prompted growing interest in integrating deep learning models into data assimilation workflows, but current data assimilation software packages cannot accommodate deep learning models. This study presents a novel Python package that seamlessly combines data assimilation with deep neural networks serving as models for the state transition and observation functions. The package, named TorchDA, implements the Kalman Filter, Ensemble Kalman Filter (EnKF), 3D Variational (3DVar), and 4D Variational (4DVar) algorithms, allowing flexible algorithm selection based on application requirements. Comprehensive experiments on the Lorenz 63 system and a two-dimensional shallow-water system demonstrate significantly better performance than standalone model predictions without assimilation. The shallow-water analysis validates the package's ability to assimilate between different physical-quantity spaces in either full space or reduced-order space. Overall, this software package enables flexible integration of deep learning representations within data assimilation, providing a versatile tool for tackling complex high-dimensional dynamical systems across scientific domains.
"TorchDA: A Python Package for Performing Data Assimilation with Deep Learning Forward and Transformation Functions" — Sibo Cheng, Jinyang Min, Che Liu, Rossella Arcucci. arXiv:2409.00244, arXiv - CS - Mathematical Software, published 2024-08-30.
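TorchDA's own API is not reproduced here, but the stochastic EnKF analysis step that such a package implements can be sketched in NumPy. The function and variable names below are illustrative; the callable observation operator `H` stands in for the neural-network observation function the package supports.

```python
import numpy as np

def enkf_analysis(ensemble, y, H, R, rng):
    """One stochastic EnKF analysis step.

    ensemble : (m, n) array of m state-vector members
    y        : (d,) observation vector
    H        : callable mapping a state to observation space
               (a neural network in the TorchDA setting)
    R        : (d, d) observation-error covariance
    """
    m = ensemble.shape[0]
    Hx = np.array([H(x) for x in ensemble])       # members in obs space, (m, d)
    X = ensemble - ensemble.mean(axis=0)          # state anomalies
    Y = Hx - Hx.mean(axis=0)                      # observation anomalies
    Pxy = X.T @ Y / (m - 1)                       # cross-covariance
    Pyy = Y.T @ Y / (m - 1) + R                   # innovation covariance
    K = Pxy @ np.linalg.inv(Pyy)                  # Kalman gain
    # perturb the observation per member so the analysis spread is consistent
    perturbed = y + rng.multivariate_normal(np.zeros(len(y)), R, size=m)
    return ensemble + (perturbed - Hx) @ K.T
```

Because `H` is only ever called, never differentiated or inverted, any learned mapping between physical-quantity spaces can be dropped in, which is the flexibility the abstract describes.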
In this study, we introduce HOBOTAN, a new solver for Higher Order Binary Optimization (HOBO). HOBOTAN supports both CPU and GPU; the GPU version, built on PyTorch, offers a fast and scalable system. The solver uses tensor networks to solve combinatorial optimization problems, employing a HOBO tensor that maps the problem and performing tensor contractions as needed. By combining techniques such as batch processing for tensor optimization and binary-based integer encoding, we significantly improve the efficiency of combinatorial optimization. In the future, larger numbers of GPUs are expected to harness greater computational power, enabling efficient collaboration among multiple GPUs for high scalability. Moreover, HOBOTAN is designed within the framework of quantum computing and thus provides insights for future quantum-computer applications. This paper details HOBOTAN's design, implementation, performance evaluation, and scalability, demonstrating its effectiveness.
"HOBOTAN: Efficient Higher Order Binary Optimization Solver with Tensor Networks and PyTorch" — Shoya Yasuda, Shunsuke Sotobayashi, Yuichiro Minato. arXiv:2407.19987, arXiv - CS - Mathematical Software, published 2024-07-29.
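The HOBO tensor contraction at the heart of this approach can be illustrated with a small NumPy sketch. The brute-force search below stands in for HOBOTAN's batched GPU sampling, and the names are illustrative, not HOBOTAN's API.

```python
import itertools
import numpy as np

def hobo_energy(T, x):
    """Energy of bit vector x under a third-order HOBO tensor T:
    E(x) = sum_{ijk} T[i,j,k] * x_i * x_j * x_k,
    computed as a tensor contraction (the kind of einsum a tensor-network
    solver would batch over many candidate bit vectors at once)."""
    return np.einsum("ijk,i,j,k->", T, x, x, x)

def brute_force_min(T):
    """Exhaustive minimizer over all bit vectors -- conceptual only;
    feasible for small n, unlike the batched solver it illustrates."""
    n = T.shape[0]
    best = min((np.array(bits) for bits in itertools.product([0, 1], repeat=n)),
               key=lambda x: hobo_energy(T, x))
    return best, hobo_energy(T, best)

# reward bits 0 and 1 (diagonal terms), penalize all three together
T = np.zeros((3, 3, 3))
T[0, 0, 0] = -1.0
T[1, 1, 1] = -1.0
T[0, 1, 2] = 5.0
best, energy = brute_force_min(T)   # minimum at x = (1, 1, 0)
```

Note that since `x_i` is 0/1, the diagonal entry `T[i,i,i]` acts as a linear bias on bit `i`, so first- and third-order terms live in one tensor.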
Stefano Chiaradonna, Petar Jevtic, Beckett Sterner
We present a Python package called Modular Petri Net Assembly Toolkit (MPAT) that empowers users to easily create large-scale, modular Petri Nets for various spatial configurations, including extensive spatial grids or those derived from shape files, augmented with heterogeneous information layers. Petri Nets are powerful discrete event system modeling tools in computational biology and engineering. However, their utility for automated construction of large-scale spatial models has been limited by gaps in existing modeling software packages. MPAT addresses this gap by supporting the development of modular Petri Net models with flexible spatial geometries.
"MPAT: Modular Petri Net Assembly Toolkit" — Stefano Chiaradonna, Petar Jevtic, Beckett Sterner. arXiv:2407.10372, arXiv - CS - Mathematical Software, published 2024-07-15.
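MPAT's actual API is not shown here, but the Petri-net firing semantics it builds on can be sketched in plain Python, with tuple-keyed places standing in for cells of a spatial grid. The class and method names are illustrative only.

```python
class PetriNet:
    """Minimal Petri net: a transition consumes tokens from its input
    places and deposits tokens in its output places."""

    def __init__(self):
        self.marking = {}       # place -> current token count
        self.transitions = {}   # name -> (input arc weights, output arc weights)

    def add_place(self, place, tokens=0):
        self.marking[place] = tokens

    def add_transition(self, name, inputs, outputs):
        self.transitions[name] = (inputs, outputs)

    def enabled(self, name):
        inputs, _ = self.transitions[name]
        return all(self.marking[p] >= w for p, w in inputs.items())

    def fire(self, name):
        inputs, outputs = self.transitions[name]
        if not self.enabled(name):
            raise ValueError(f"transition {name!r} is not enabled")
        for p, w in inputs.items():
            self.marking[p] -= w
        for p, w in outputs.items():
            self.marking[p] += w

# two grid cells joined by a "migrate" transition -- the kind of repeated
# spatial module a toolkit like MPAT assembles automatically at scale
net = PetriNet()
net.add_place(("cell", 0, 0), tokens=3)
net.add_place(("cell", 0, 1))
net.add_transition("migrate", {("cell", 0, 0): 1}, {("cell", 0, 1): 1})
net.fire("migrate")
```

Replicating such a module over every cell of a grid or shapefile region, and wiring neighboring cells together, is precisely the large-scale assembly task the abstract says existing packages leave manual.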
Kacper Derlatka, Maciej Manna, Oleksii Bulenok, David Zwicker, Sylwester Arabas
The numba-mpi package offers access to the Message Passing Interface (MPI) routines from Python code that uses the Numba just-in-time (JIT) compiler. As a result, high-performance and multi-threaded Python code may utilize MPI communication facilities without leaving the JIT-compiled code blocks, which is not possible with the mpi4py package, a higher-level Python interface to MPI. For debugging purposes, numba-mpi retains full functionality of the code even if the JIT compilation is disabled. The numba-mpi API constitutes a thin wrapper around the C API of MPI and is built around Numpy arrays including handling of non-contiguous views over array slices. Project development is hosted at GitHub leveraging the mpi4py/setup-mpi workflow enabling continuous integration tests on Linux (MPICH, OpenMPI & Intel MPI), macOS (MPICH &