
Latest publications in arXiv - CS - Mathematical Software

A Sparse Fast Chebyshev Transform for High-Dimensional Approximation
Pub Date : 2023-09-26 DOI: arxiv-2309.14584
Dalton Jones, Pierre-David Letourneau, Matthew J. Morse, M. Harper Langston
We present the Fast Chebyshev Transform (FCT), a fast, randomized algorithm to compute a Chebyshev approximation of functions in high dimensions from the knowledge of the location of its nonzero Chebyshev coefficients. Rather than sampling a full-resolution Chebyshev grid in each dimension, we randomly sample several grids with varied resolutions and solve a least-squares problem in coefficient space in order to compute a polynomial approximating the function of interest across all grids simultaneously. We theoretically and empirically show that the FCT exhibits quasi-linear scaling and high numerical accuracy on challenging and complex high-dimensional problems. We demonstrate the effectiveness of our approach compared to alternative Chebyshev approximation schemes. In particular, we highlight our algorithm's effectiveness in high dimensions, demonstrating significant speedups over commonly-used alternative techniques.
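As a rough illustration of the coefficient-space least-squares idea, here is a one-dimensional sketch that recovers Chebyshev coefficients on an assumed-known sparse support from random samples using numpy; the test function, support set, and sample count are invented for illustration, and this is not the authors' FCT algorithm.

import numpy as np
from numpy.polynomial import chebyshev as C

support = [0, 3, 7, 12]                  # assumed-known indices of nonzero coefficients
rng = np.random.default_rng(0)

true_coeffs = np.zeros(13)
true_coeffs[support] = [1.0, -0.5, 0.25, 0.1]

def f(x):
    # Test function whose Chebyshev expansion lives exactly on `support`
    return C.chebval(x, true_coeffs)

# Random sample points instead of a full-resolution Chebyshev grid
x = rng.uniform(-1.0, 1.0, size=40)
A = C.chebvander(x, 12)[:, support]      # Chebyshev Vandermonde matrix restricted to the support
coeffs, *_ = np.linalg.lstsq(A, f(x), rcond=None)
print(dict(zip(support, np.round(coeffs, 6))))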
Citations: 0
pyPPG: A Python toolbox for comprehensive photoplethysmography signal analysis
Pub Date : 2023-09-24 DOI: arxiv-2309.13767
Marton A. Goda, Peter H. Charlton, Joachim A. Behar
Photoplethysmography is a non-invasive optical technique that measures changes in blood volume within tissues. It is commonly and increasingly used in a variety of research and clinical applications to assess vascular dynamics and physiological parameters. Yet, contrary to heart rate variability measures, a field which has seen the development of stable standards and advanced toolboxes and software, no such standards and open tools exist for continuous photoplethysmogram (PPG) analysis. Consequently, the primary objective of this research was to identify, standardize, implement and validate key digital PPG biomarkers. This work describes the creation of a standard Python toolbox, denoted pyPPG, for long-term continuous PPG time series analysis recorded using a standard finger-based transmission pulse oximeter. The improved PPG peak detector had an F1-score of 88.19% on the state-of-the-art benchmark when evaluated on 2,054 adult polysomnography recordings totaling over 91 million reference beats. This algorithm outperformed the open-source original Matlab implementation by ~5% when benchmarked on a subset of 100 randomly selected MESA recordings. More than 3,000 fiducial points were manually annotated by two annotators in order to validate the fiducial point detector. The detector consistently demonstrated high performance, with a mean absolute error of less than 10 ms for all fiducial points. Based on these fiducial points, pyPPG engineers a set of 74 PPG biomarkers. Studying the PPG time series variability using pyPPG can enhance our understanding of the manifestations and etiology of diseases. This toolbox can also be used for biomarker engineering in training data-driven models. pyPPG is available on physiozoo.org.
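pyPPG's own detector and biomarker pipeline are not reproduced here; as a generic sketch of PPG beat detection and inter-beat-interval extraction, one could use scipy.signal.find_peaks on a synthetic pulse-like signal. The sampling rate, prominence threshold, and refractory period below are arbitrary choices, not pyPPG parameters.

import numpy as np
from scipy.signal import find_peaks

fs = 125                                    # assumed sampling rate in Hz
t = np.arange(0, 10, 1 / fs)
# Crude pulse-wave surrogate at ~72 beats per minute plus a harmonic and noise
ppg = np.sin(2 * np.pi * 1.2 * t) + 0.3 * np.sin(2 * np.pi * 2.4 * t)
ppg += 0.05 * np.random.default_rng(0).standard_normal(t.size)

# Systolic peaks: enforce a refractory period of ~0.4 s and a minimal prominence
peaks, _ = find_peaks(ppg, distance=int(0.4 * fs), prominence=0.5)
ibi_ms = np.diff(peaks) / fs * 1000.0       # inter-beat intervals in milliseconds
print(f"{peaks.size} beats, mean IBI = {ibi_ms.mean():.1f} ms")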
Citations: 0
Physics Informed Neural Network Code for 2D Transient Problems (PINN-2DT) Compatible with Google Colab
Pub Date : 2023-09-24 DOI: arxiv-2310.03755
Paweł Maczuga, Maciej Skoczeń, Przemysław Rożnawski, Filip Tłuszcz, Marcin Szubert, Marcin Łoś, Witold Dzwinel, Keshav Pingali, Maciej Paszyński
We present an open-source Physics Informed Neural Network environment for simulations of transient phenomena on two-dimensional rectangular domains, with the following features: (1) it is compatible with Google Colab, which allows automatic execution in a cloud environment; (2) it supports two-dimensional time-dependent PDEs; (3) it provides a simple interface for definition of the residual loss, boundary condition and initial loss, together with their weights; (4) it supports Neumann and Dirichlet boundary conditions; (5) it allows for customizing the number of layers and neurons per layer, as well as for an arbitrary activation function; (6) the learning rate and number of epochs are available as parameters; (7) it automatically differentiates the PINN with respect to spatial and temporal variables; (8) it provides routines for plotting the convergence (with running average), initial conditions learnt, 2D and 3D snapshots from the simulation, and movies; (9) it includes a library of problems: (a) non-stationary heat transfer; (b) wave equation modeling a tsunami; (c) atmospheric simulations including thermal inversion; (d) tumor growth simulations.
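A minimal PyTorch sketch of the residual loss for a 2D transient heat equation, the kind of term such a PINN environment assembles alongside boundary and initial losses; the network size, diffusivity alpha, collocation points, and optimizer settings are arbitrary, and this is not the PINN-2DT code itself.

import torch

# Small fully-connected network u_theta(x, y, t)
net = torch.nn.Sequential(
    torch.nn.Linear(3, 32), torch.nn.Tanh(),
    torch.nn.Linear(32, 32), torch.nn.Tanh(),
    torch.nn.Linear(32, 1),
)
alpha = 0.1                        # assumed diffusivity

def residual_loss(xyt):
    xyt = xyt.clone().requires_grad_(True)
    u = net(xyt)
    grads = torch.autograd.grad(u, xyt, torch.ones_like(u), create_graph=True)[0]
    u_x, u_y, u_t = grads[:, 0:1], grads[:, 1:2], grads[:, 2:3]
    u_xx = torch.autograd.grad(u_x, xyt, torch.ones_like(u_x), create_graph=True)[0][:, 0:1]
    u_yy = torch.autograd.grad(u_y, xyt, torch.ones_like(u_y), create_graph=True)[0][:, 1:2]
    return ((u_t - alpha * (u_xx + u_yy)) ** 2).mean()   # heat-equation residual u_t - alpha*Laplacian(u)

pts = torch.rand(256, 3)           # random collocation points in the unit space-time box
opt = torch.optim.Adam(net.parameters(), lr=1e-3)
for _ in range(100):               # epochs and learning rate would be user parameters
    opt.zero_grad()
    loss = residual_loss(pts)      # boundary and initial losses would be added here with weights
    loss.backward()
    opt.step()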
Citations: 0
Ensemble Differential Evolution with Simulation-Based Hybridization and Self-Adaptation for Inventory Management Under Uncertainty
Pub Date : 2023-09-22 DOI: arxiv-2309.12852
Sarit Maitra, Vivek Mishra, Sukanya Kundu
This study proposes an Ensemble Differential Evolution with Simulation-Based Hybridization and Self-Adaptation (EDESH-SA) approach for inventory management (IM) under uncertainty. In this study, DE with multiple runs is combined with a simulation-based hybridization method that includes a self-adaptive mechanism that dynamically alters mutation and crossover rates based on the success or failure of each iteration. Due to its adaptability, the algorithm is able to handle the complexity and uncertainty present in IM. Utilizing Monte Carlo Simulation (MCS), the continuous review (CR) inventory strategy is examined while accounting for stochasticity and various demand scenarios. This simulation-based approach enables a realistic assessment of the proposed algorithm's applicability in resolving the challenges faced by IM in practical settings. The empirical findings demonstrate the potential of the proposed method to improve the financial performance of IM and optimize large search spaces. The study makes use of performance testing with the Ackley function and Sensitivity Analysis with Perturbations to investigate how changes in variables affect the objective value. This analysis provides valuable insights into the behavior and robustness of the algorithm.
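A generic sketch of differential evolution with success-based self-adaptation of the mutation factor F and crossover rate CR, exercised on the Ackley function mentioned above; the adaptation constants are illustrative guesses and the inventory simulation layer is omitted, so this is not the EDESH-SA algorithm itself.

import numpy as np

def ackley(x):
    x = np.asarray(x)
    return (-20 * np.exp(-0.2 * np.sqrt(np.mean(x ** 2)))
            - np.exp(np.mean(np.cos(2 * np.pi * x))) + 20 + np.e)

rng = np.random.default_rng(0)
dim, n_pop, F, CR = 5, 30, 0.5, 0.9
pop = rng.uniform(-5, 5, (n_pop, dim))
fit = np.array([ackley(p) for p in pop])

for gen in range(200):
    successes = 0
    for i in range(n_pop):
        a, b, c = pop[rng.choice([j for j in range(n_pop) if j != i], 3, replace=False)]
        mutant = a + F * (b - c)                      # DE/rand/1 mutation
        cross = rng.random(dim) < CR
        cross[rng.integers(dim)] = True               # ensure at least one component crosses over
        trial = np.where(cross, mutant, pop[i])
        f_trial = ackley(trial)
        if f_trial < fit[i]:                          # greedy selection
            pop[i], fit[i] = trial, f_trial
            successes += 1
    rate = successes / n_pop                          # self-adaptation: nudge F and CR by success rate
    F = np.clip(F * (1.1 if rate > 0.2 else 0.9), 0.1, 1.0)
    CR = np.clip(CR + (0.02 if rate > 0.2 else -0.02), 0.1, 1.0)

print(f"best Ackley value: {fit.min():.4f}")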
Citations: 0
Unlocking massively parallel spectral proper orthogonal decompositions in the PySPOD package
Pub Date : 2023-09-21 DOI: arxiv-2309.11808
Marcin Rogowski, Brandon C. Y. Yeung, Oliver T. Schmidt, Romit Maulik, Lisandro Dalcin, Matteo Parsani, Gianmarco Mengaldo
We propose a parallel (distributed) version of the spectral proper orthogonal decomposition (SPOD) technique. The parallel SPOD algorithm distributes the spatial dimension of the dataset, preserving time. This approach is adopted to preserve the non-distributed fast Fourier transform of the data in time, thereby avoiding the associated bottlenecks. The parallel SPOD algorithm is implemented in the PySPOD (https://github.com/MathEXLab/PySPOD) library and makes use of the standard message passing interface (MPI) library, implemented in Python via mpi4py (https://mpi4py.readthedocs.io/en/stable/). An extensive performance evaluation of the parallel package is provided, including strong and weak scalability analyses. The open-source library allows the analysis of large datasets of interest across the scientific community. Here, we present applications in fluid dynamics and geophysics that are extremely difficult (if not impossible) to achieve without a parallel algorithm. This work opens the path toward modal analyses of big quasi-stationary data, helping to uncover new unexplored spatio-temporal patterns.
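A serial, single-process sketch of the SPOD computation (overlapping block FFTs in time, then a per-frequency SVD across block realizations); it does not use PySPOD's API or MPI, and the toy dataset, window, and block sizes are arbitrary. In the parallel algorithm the spatial axis of `data` would be split across MPI ranks while the time axis stays whole.

import numpy as np

rng = np.random.default_rng(0)
n_t, n_x, n_fft = 576, 64, 128
x = np.linspace(0, 2 * np.pi, n_x)
t = np.arange(n_t)
# Toy space-time dataset: one coherent mode plus noise, shape (time, space)
data = np.sin(x)[None, :] * np.cos(0.2 * t)[:, None] + 0.1 * rng.standard_normal((n_t, n_x))

hop = n_fft // 2
n_blk = (n_t - n_fft) // hop + 1                  # Welch-style blocks with 50% overlap
win = np.hanning(n_fft)
blocks = np.stack([data[i * hop: i * hop + n_fft] for i in range(n_blk)])
qhat = np.fft.rfft(blocks * win[None, :, None], axis=1)   # (n_blk, n_freq, n_x)

for k in range(3):                                # first few frequency bins
    Q = qhat[:, k, :].T / np.sqrt(n_blk)          # spatial-by-realization matrix at frequency k
    modes, svals, _ = np.linalg.svd(Q, full_matrices=False)
    print(f"frequency bin {k}: leading SPOD energy {svals[0] ** 2:.3f}")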
Citations: 0
Satisfiability.jl: Satisfiability Modulo Theories in Julia
Pub Date : 2023-09-15 DOI: arxiv-2309.08778
Emiko Soroka, Mykel J. Kochenderfer, Sanjay Lall
Satisfiability modulo theories (SMT) is a core tool in formal verification. While the SMT-LIB specification language can be used to interact with theorem-proving software, a high-level interface allows for faster and easier specification of complex SMT formulae. In this paper we discuss the design and implementation of a novel publicly-available interface for interacting with SMT-LIB compliant solvers in the Julia programming language.
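The Julia package's interface is not reproduced here; as a rough analogue of what a high-level SMT interface hides from the user, the Python bindings of the z3 solver express the same declare/assert/check-sat workflow that would otherwise be written by hand in SMT-LIB.

from z3 import Ints, Solver, sat

# Declare integer variables and assert constraints; the solver object handles the
# underlying SMT-LIB commands (declare-const, assert, check-sat) internally.
x, y = Ints("x y")
s = Solver()
s.add(x + y == 10, x > 0, y > x)

if s.check() == sat:
    print(s.model())        # a satisfying assignment, e.g. x = 1, y = 9
else:
    print("unsatisfiable")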
Citations: 0
$\texttt{ChisholmD.wl}$ - Automated rational approximant for bi-variate series
Pub Date : 2023-09-14 DOI: arxiv-2309.07687
Souvik Bera, Tanay Pathak
The Chisholm rational approximant is a natural generalization to two variables of the well-known single-variable Padé approximant, and has the advantage of reducing to the latter when one of the variables is set equal to 0. We present, to our knowledge, the first automated Mathematica package to evaluate diagonal Chisholm approximants of two-variable series. For the moment, the package can only be used to evaluate diagonal approximants, i.e. the maximum powers of both the variables, in both the numerator and the denominator, are equal to some integer $M$. We further modify the original method so as to allow us to evaluate the approximants around some general point $(x,y)$, not necessarily $(0,0)$. Using the approximants around a general point $(x,y)$ allows us to get a better estimate of the result when the point of evaluation is far from $(0,0)$. Several examples of the elementary functions have been studied, which shows that the approximants can be useful for analytic continuation and convergence acceleration purposes. We continue our study using various examples of two-variable hypergeometric series, $\mathrm{Li}_{2,2}(x,y)$, etc. that arise in particle physics and in the study of critical phenomena in condensed matter physics. The demonstration of the package is discussed in detail and the Mathematica package is provided as an ancillary file.
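The package itself is a Mathematica file, so its interface is not reproduced here; as a quick Python illustration of the single-variable Padé case to which the diagonal Chisholm approximant reduces, scipy can build a diagonal [2/2] approximant of exp(x) from its Taylor coefficients.

import numpy as np
from math import factorial
from scipy.interpolate import pade

coeffs = [1.0 / factorial(k) for k in range(5)]   # Taylor coefficients of exp(x), ascending order
p, q = pade(coeffs, 2)                            # diagonal [2/2] Padé approximant as a poly1d pair

x = 1.0
print(p(x) / q(x), np.exp(x))                     # the rational approximant is close to e at x = 1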
Citations: 0
A Distributed Data-Parallel PyTorch Implementation of the Distributed Shampoo Optimizer for Training Neural Networks At-Scale
Pub Date : 2023-09-12 DOI: arxiv-2309.06497
Hao-Jun Michael Shi, Tsung-Hsien Lee, Shintaro Iwasaki, Jose Gallego-Posada, Zhijing Li, Kaushik Rangadurai, Dheevatsa Mudigere, Michael Rabbat
Shampoo is an online and stochastic optimization algorithm belonging to the AdaGrad family of methods for training neural networks. It constructs a block-diagonal preconditioner where each block consists of a coarse Kronecker product approximation to full-matrix AdaGrad for each parameter of the neural network. In this work, we provide a complete description of the algorithm as well as the performance optimizations that our implementation leverages to train deep networks at-scale in PyTorch. Our implementation enables fast multi-GPU distributed data-parallel training by distributing the memory and computation associated with blocks of each parameter via PyTorch's DTensor data structure and performing an AllGather primitive on the computed search directions at each iteration. This major performance enhancement enables us to achieve at most a 10% performance reduction in per-step wall-clock time compared against standard diagonal-scaling-based adaptive gradient methods. We validate our implementation by performing an ablation study on training ImageNet ResNet50, demonstrating Shampoo's superiority over standard training recipes with minimal hyperparameter tuning.
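A single-matrix, single-process sketch of the Kronecker-factored Shampoo update (accumulate L = sum of G G^T and R = sum of G^T G, then precondition with L^{-1/4} G R^{-1/4}); the gradients below are random placeholders, the learning rate and epsilon are arbitrary, and none of the paper's distributed DTensor or AllGather machinery appears here.

import torch

def inv_root(mat, p, eps=1e-6):
    # Inverse p-th root of a symmetric positive semi-definite matrix via eigendecomposition
    vals, vecs = torch.linalg.eigh(mat)
    return vecs @ torch.diag(vals.clamp_min(eps) ** (-1.0 / p)) @ vecs.T

torch.manual_seed(0)
W = torch.randn(8, 4)                    # one weight matrix, i.e. one preconditioner block
L = torch.zeros(8, 8)                    # left Kronecker factor statistics
R = torch.zeros(4, 4)                    # right Kronecker factor statistics
lr = 0.1

for step in range(10):
    G = torch.randn(8, 4)                # stand-in for a minibatch gradient of W
    L += G @ G.T
    R += G.T @ G
    precond_grad = inv_root(L, 4) @ G @ inv_root(R, 4)   # L^{-1/4} G R^{-1/4}
    W -= lr * precond_grad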
Citations: 0
Integration of Quantum Accelerators with High Performance Computing – A Review of Quantum Programming Tools
Pub Date : 2023-09-12 DOI: arxiv-2309.06167
Amr Elsharkawy, Xiao-Ting Michelle To, Philipp Seitz, Yanbin Chen, Yannick Stade, Manuel Geiger, Qunsheng Huang, Xiaorang Guo, Muhammad Arslan Ansari, Christian B. Mendl, Dieter Kranzlmüller, Martin Schulz
Quantum computing (QC) introduces a novel mode of computation with the possibility of greater computational power that remains to be exploited – presenting exciting opportunities for high performance computing (HPC) applications. However, recent advancements in the field have made clear that QC does not supplant conventional HPC, but can rather be incorporated into current heterogeneous HPC infrastructures as an additional accelerator, thereby enabling the optimal utilization of both paradigms. The desire for such integration significantly affects the development of software for quantum computers, which in turn influences the necessary software infrastructure. To date, previous review papers have investigated various quantum programming tools (QPTs) (such as languages, libraries, frameworks) in their ability to program, compile, and execute quantum circuits. However, the integration effort with classical HPC frameworks or systems has not been addressed. This study aims to characterize existing QPTs from an HPC perspective, investigating if existing QPTs have the potential to be efficiently integrated with classical computing models and determining where work is still required. This work structures a set of criteria into an analysis blueprint that enables HPC scientists to assess whether a QPT is suitable for the quantum-accelerated classical application at hand.
Citations: 0
CDL: A fast and flexible library for the study of permutation sets with structural restrictions
Pub Date : 2023-09-12 DOI: arxiv-2309.06306
Bei Zhou, Klas Markstrōm, Søren Riis
In this paper, we introduce CDL, a software library designed for the analysis of permutations and linear orders subject to various structural restrictions. Prominent examples of these restrictions include pattern avoidance, a topic of interest in both computer science and combinatorics, and "never conditions" utilized in social choice and voting theory. CDL offers a range of fundamental functionalities, including identifying the permutations that meet specific restrictions and determining the isomorphism of such sets. To facilitate exploration across extensive domains, CDL incorporates multiple search strategies and heuristics.
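CDL's actual interface is not shown here; as a brute-force Python illustration of pattern avoidance, one of the structural restrictions the library handles, the following counts 231-avoiding permutations and recovers the Catalan numbers 1, 2, 5, 14, 42.

from itertools import combinations, permutations

def contains_pattern(perm, pattern):
    """True if perm has a subsequence order-isomorphic to pattern."""
    k = len(pattern)
    for idx in combinations(range(len(perm)), k):
        sub = [perm[i] for i in idx]
        # Order-isomorphic means all pairwise comparisons agree with the pattern
        if all((sub[a] < sub[b]) == (pattern[a] < pattern[b])
               for a in range(k) for b in range(a + 1, k)):
            return True
    return False

pattern = (2, 3, 1)
for n in range(1, 6):
    avoiding = [p for p in permutations(range(1, n + 1)) if not contains_pattern(p, pattern)]
    print(n, len(avoiding))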
Citations: 0