
Latest articles: arXiv - STAT - Computation

Model-Embedded Gaussian Process Regression for Parameter Estimation in Dynamical System
Pub Date : 2024-09-18 DOI: arxiv-2409.11745
Ying Zhou, Jinglai Li, Xiang Zhou, Hongqiao Wang
Identifying dynamical systems (DS) is a vital task in science and engineering. Traditional methods require numerous calls to the DS solver, rendering likelihood-based or least-squares inference frameworks impractical. For efficient parameter inference, two state-of-the-art techniques are the kernel method for modeling and the "one-step framework" for jointly inferring unknown parameters and hyperparameters. The kernel method is a quick and straightforward technique, but it cannot estimate solutions and their derivatives, which must strictly adhere to physical laws. We propose a model-embedded "one-step" Bayesian framework for joint inference of unknown parameters and hyperparameters by maximizing the marginal likelihood. This approach models the solution and its derivatives using Gaussian process regression (GPR), taking into account smoothness and continuity properties, and treats differential equations as constraints that can be naturally integrated into the Bayesian framework in the linear case. Additionally, we prove the convergence of the model-embedded Gaussian process regression (ME-GPR) for theoretical development. Motivated by Taylor expansion, we introduce a piecewise first-order linearization strategy to handle nonlinear dynamical systems. We derive estimates and confidence intervals, demonstrating that they exhibit low bias and good coverage properties for both simulated models and real data.
Citations: 0
A Robust Approach to Gaussian Processes Implementation
Pub Date : 2024-09-17 DOI: arxiv-2409.11577
Juliette Mukangango, Amanda Muyskens, Benjamin W. Priest
Gaussian Process (GP) regression is a flexible modeling technique used to predict outputs and to capture uncertainty in the predictions. However, the GP regression process becomes computationally intensive when the training spatial dataset has a large number of observations. To address this challenge, we introduce a scalable GP algorithm, termed MuyGPs, which incorporates nearest neighbor and leave-one-out cross-validation during training. This approach enables the evaluation of large spatial datasets with state-of-the-art accuracy and speed in certain spatial problems. Despite these advantages, conventional quadratic loss functions used in the MuyGPs optimization, such as Root Mean Squared Error (RMSE), are highly influenced by outliers. We explore the behavior of MuyGPs in cases involving outlying observations and subsequently develop a robust approach to handle and mitigate their impact. Specifically, we introduce a novel leave-one-out loss function based on the pseudo-Huber function (LOOPH) that effectively accounts for outliers in large spatial datasets within the MuyGPs framework. Our simulation study shows that the "LOOPH" loss method maintains accuracy despite outlying observations, establishing MuyGPs as a powerful tool for mitigating unusual observation impacts in the large-data regime. In the analysis of U.S. ozone data, MuyGPs provides accurate predictions and uncertainty quantification, demonstrating its utility in managing data anomalies. Through these efforts, we advance the understanding of GP regression in spatial contexts.
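The pseudo-Huber function at the heart of the LOOPH loss fits in a few lines. A hedged sketch (generic pseudo-Huber only; the paper's leave-one-out construction around it is not reproduced here):

```python
import numpy as np

def pseudo_huber(r, delta=1.0):
    # delta^2 * (sqrt(1 + (r/delta)^2) - 1): quadratic for |r| << delta, linear in the tails
    return delta**2 * (np.sqrt(1.0 + (r / delta) ** 2) - 1.0)

r = np.array([0.1, 1.0, 10.0])
squared = 0.5 * r**2                  # the quadratic penalty an RMSE-style loss applies
robust = pseudo_huber(r, delta=1.0)   # nearly identical for small r, far smaller at r = 10
```

The linear tails are what keep a single outlying observation from dominating the hyperparameter optimization, which is exactly the failure mode the abstract attributes to quadratic losses.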
Citations: 0
Effects of the entropy source on Monte Carlo simulations
Pub Date : 2024-09-17 DOI: arxiv-2409.11539
Anton Lebedev, Annika Möslein, Olha I. Yaman, Del Rajan, Philip Intallura
In this paper we show how different sources of random numbers influence the outcomes of Monte Carlo simulations. We compare industry-standard pseudo-random number generators (PRNGs) to a quantum random number generator (QRNG) and show, using examples of Monte Carlo simulations with exact solutions, that the QRNG yields statistically significantly better approximations than the PRNGs. Our results demonstrate that higher accuracy can be achieved in the commonly known Monte Carlo method for approximating $\pi$. For Buffon's needle experiment, we further quantify a potential reduction in approximation errors by up to $1.89\times$ for optimal parameter choices when using a QRNG, and a reduction of the sample size by $\sim 8\times$ for sub-optimal parameter choices. We attribute the observed higher accuracy to the underlying differences in the random sampling, where a uniformity analysis reveals a tendency of the QRNG to sample the solution space more homogeneously. Additionally, we compare the results obtained with the QRNG and PRNG in solving the non-linear stochastic Schrödinger equation, benchmarked against the analytical solution. We observe higher accuracy of the approximations of the QRNG and demonstrate that equivalent results can be achieved at 1/3 to 1/10th of the costs.
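The first benchmark, Monte Carlo approximation of $\pi$, depends on the entropy source only through the generator handed to the sampler. A small illustration (numpy's PCG64 and MT19937 stand in for the entropy sources; a QRNG stream would plug into the same interface as a third source):

```python
import numpy as np

def mc_pi(n, rng):
    # fraction of uniform points inside the unit quarter-disc estimates pi / 4
    x, y = rng.random(n), rng.random(n)
    return 4.0 * np.mean(x * x + y * y <= 1.0)

# swap the bit generator to swap the entropy source; everything else is unchanged
estimates = {
    name: mc_pi(1_000_000, np.random.Generator(bitgen(seed=7)))
    for name, bitgen in [("PCG64", np.random.PCG64), ("MT19937", np.random.MT19937)]
}
```

With $10^6$ samples the standard error is about $1.6\times10^{-3}$, so differences between sources of this size are only detectable with repeated runs and a statistical test, which is what the paper does.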
Citations: 0
HJ-sampler: A Bayesian sampler for inverse problems of a stochastic process by leveraging Hamilton-Jacobi PDEs and score-based generative models
Pub Date : 2024-09-15 DOI: arxiv-2409.09614
Tingwei Meng, Zongren Zou, Jérôme Darbon, George Em Karniadakis
The interplay between stochastic processes and optimal control has been extensively explored in the literature. With the recent surge in the use of diffusion models, stochastic processes have increasingly been applied to sample generation. This paper builds on the log transform, known as the Cole-Hopf transform in Brownian motion contexts, and extends it within a more abstract framework that includes a linear operator. Within this framework, we found that the well-known relationship between the Cole-Hopf transform and optimal transport is a particular instance where the linear operator acts as the infinitesimal generator of a stochastic process. We also introduce a novel scenario where the linear operator is the adjoint of the generator, linking to Bayesian inference under specific initial and terminal conditions. Leveraging this theoretical foundation, we develop a new algorithm, named the HJ-sampler, for Bayesian inference for the inverse problem of a stochastic differential equation with given terminal observations. The HJ-sampler involves two stages: (1) solving the viscous Hamilton-Jacobi partial differential equations, and (2) sampling from the associated stochastic optimal control problem. Our proposed algorithm naturally allows for flexibility in selecting the numerical solver for viscous HJ PDEs. We introduce two variants of the solver: the Riccati-HJ-sampler, based on the Riccati method, and the SGM-HJ-sampler, which utilizes diffusion models. We demonstrate the effectiveness and flexibility of the proposed methods by applying them to solve Bayesian inverse problems involving various stochastic processes and prior distributions, including applications that address model misspecification and quantify model uncertainty.
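The log transform the paper generalizes has a concrete classical statement (a standard identity in the Brownian-motion case, not the paper's abstract-operator version): if $\phi$ solves the heat equation, then $S = -\epsilon \log \phi$ solves a viscous Hamilton-Jacobi equation.

```latex
% classical Cole-Hopf / log transform:
\partial_t \phi = \tfrac{\epsilon}{2}\,\Delta\phi ,
\qquad S := -\epsilon \log\phi
\quad\Longrightarrow\quad
\partial_t S + \tfrac{1}{2}\,\lvert\nabla S\rvert^{2} = \tfrac{\epsilon}{2}\,\Delta S .
```

Substituting $\phi = e^{-S/\epsilon}$ into the heat equation and dividing by $\phi$ gives the HJ equation directly, which is why solving viscous HJ PDEs (stage 1 of the HJ-sampler) is equivalent to tracking the log of a diffusion density.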
Citations: 0
Reducing Shape-Graph Complexity with Application to Classification of Retinal Blood Vessels and Neurons
Pub Date : 2024-09-13 DOI: arxiv-2409.09168
Benjamin Beaudett, Anuj Srivastava
Shape graphs are complex geometrical structures commonly found in biological and anatomical systems. A shape graph is a collection of nodes, some connected by curvilinear edges with arbitrary shapes. Their high complexity stems from the large number of nodes and edges and the complex shapes of edges. With an eye toward statistical analysis, one seeks low-complexity representations that retain as much of the global structure of the original shape graphs as possible. This paper develops a framework for reducing graph complexity using hierarchical clustering procedures that replace groups of nodes and edges with their simpler representatives. It demonstrates this framework using graphs of retinal blood vessels in two dimensions and neurons in three dimensions. The paper also presents experiments on classification of shape graphs using progressively reduced levels of graph complexity. The accuracy of disease detection in retinal blood vessels drops quickly when the complexity is reduced, with accuracy loss particularly associated with discarding terminal edges. Accuracy in identifying neural cell types remains stable under complexity reduction.
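The node-and-edge replacement step can be sketched independently of the clustering that produces the groups. A toy illustration (the `coarsen` helper is hypothetical; the paper's procedure also simplifies the curvilinear edge shapes, which is omitted here):

```python
def coarsen(edges, groups):
    # merge each group of nodes into one representative; keep deduplicated
    # edges between groups and drop edges internal to a group (self-loops)
    coarse = set()
    for u, v in edges:
        cu, cv = groups[u], groups[v]
        if cu != cv:
            coarse.add((min(cu, cv), max(cu, cv)))
    return sorted(coarse)

# toy path graph 0-1-2-3-4-5; a hierarchical clustering step has proposed
# merging {0, 1, 2} into cluster 0 and {3, 4, 5} into cluster 1
edges = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 5)]
groups = {0: 0, 1: 0, 2: 0, 3: 1, 4: 1, 5: 1}
print(coarsen(edges, groups))   # → [(0, 1)]
```

Terminal edges are exactly the ones that vanish first under such merging, consistent with the reported accuracy loss in disease detection.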
Citations: 0
Statistical Finite Elements via Interacting Particle Langevin Dynamics
Pub Date : 2024-09-11 DOI: arxiv-2409.07101
Alex Glyn-Davies, Connor Duffin, Ieva Kazlauskaite, Mark Girolami, Ö. Deniz Akyildiz
In this paper, we develop a class of interacting particle Langevin algorithms to solve inverse problems for partial differential equations (PDEs). In particular, we leverage the statistical finite elements (statFEM) formulation to obtain a finite-dimensional latent variable statistical model, where the parameter is that of the (discretised) forward map and the latent variable is the statFEM solution of the PDE, which is assumed to be partially observed. We then adapt a recently proposed expectation-maximisation-like scheme, the interacting particle Langevin algorithm (IPLA), to this problem and obtain a joint estimation procedure for the parameters and the latent variables. We consider three main examples: (i) estimating the forcing for a linear Poisson PDE, (ii) estimating the forcing for a nonlinear Poisson PDE, and (iii) estimating the diffusivity for a linear Poisson PDE. We provide computational complexity estimates for forcing estimation in the linear case. We also provide comprehensive numerical experiments and preconditioning strategies that significantly improve the performance, showing that the proposed class of methods can be the method of choice for parameter inference in PDE models.
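The flavor of IPLA, particles and parameter following coupled Langevin updates with the parameter noise shrunk by $1/\sqrt{N}$, can be shown on a toy model far simpler than the statFEM setting. A hedged sketch (model, step size, and particle count are all illustrative, not from the paper):

```python
import numpy as np

rng = np.random.default_rng(1)
y, s2 = 2.0, 0.25            # observation and its noise variance
N, h, steps = 200, 0.05, 2000

x = rng.normal(size=N)       # latent-variable particles
theta = 0.0                  # parameter
for _ in range(steps):
    # particles: Langevin step on log p(x, y | theta) with x ~ N(theta, 1), y ~ N(x, s2)
    grad_x = (theta - x) + (y - x) / s2
    x = x + h * grad_x + np.sqrt(2.0 * h) * rng.normal(size=N)
    # parameter: drift averages the particle gradients, noise scaled by 1/sqrt(N)
    grad_theta = np.mean(x - theta)
    theta = theta + h * grad_theta + np.sqrt(2.0 * h / N) * rng.normal()
# theta settles near the marginal maximiser, here y = 2.0
```

As $N$ grows the parameter noise vanishes and the scheme behaves like an EM-style maximiser, which is the sense in which IPLA is "expectation-maximisation-like".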
Citations: 0
Graph sub-sampling for divide-and-conquer algorithms in large networks
Pub Date : 2024-09-11 DOI: arxiv-2409.06994
Eric Yanchenko
As networks continue to increase in size, current methods must be capable of handling large numbers of nodes and edges in order to be practically relevant. Instead of working directly with the entire (large) network, analyzing sub-networks has become a popular approach. Due to a network's inherent inter-connectedness, sub-sampling is not a trivial task. While this problem has gained attention in recent years, it has not received sufficient attention from the statistics community. In this work, we provide a thorough comparison of seven graph sub-sampling algorithms by applying them to divide-and-conquer algorithms for community structure and core-periphery (CP) structure. After discussing the various algorithms and sub-sampling routines, we derive theoretical results for the mis-classification rate of the divide-and-conquer algorithm for CP structure under various sub-sampling schemes. We then perform extensive experiments on both simulated and real-world data to compare the various methods. For the community detection task, we found that sampling nodes uniformly at random yields the best performance. For CP structure, on the other hand, there was no single winner, but algorithms which sampled core nodes at a higher rate consistently outperformed other sampling routines, e.g., random edge sampling and random walk sampling. The varying performance of the sampling algorithms on different tasks demonstrates the importance of carefully selecting a sub-sampling routine for the specific application.
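Two of the routines compared, uniform node sampling and random-walk sampling, are easy to state on an adjacency-list graph. A minimal sketch (function names are illustrative; random edge sampling would follow the same pattern over the edge list):

```python
import random

def node_sample(adj, k, rng):
    # uniform node sampling: keep k nodes drawn at random plus their induced edges
    keep = set(rng.sample(sorted(adj), k))
    return {u: [v for v in adj[u] if v in keep] for u in keep}

def random_walk_sample(adj, k, rng, start=None):
    # random-walk sampling: walk the graph until k distinct nodes have been visited
    u = start if start is not None else rng.choice(sorted(adj))
    seen = {u}
    while len(seen) < k:
        u = rng.choice(adj[u])
        seen.add(u)
    return {v: [w for w in adj[v] if w in seen] for v in seen}

adj = {i: [(i - 1) % 10, (i + 1) % 10] for i in range(10)}   # 10-node cycle graph
sub = node_sample(adj, 4, random.Random(0))
walk = random_walk_sample(adj, 5, random.Random(1), start=0)
```

The two routines bias the sample differently: a random walk concentrates on well-connected regions (useful for hitting core nodes), while uniform node sampling does not, matching the task-dependent rankings reported above.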
Citations: 0
Optimizing VarLiNGAM for Scalable and Efficient Time Series Causal Discovery
Pub Date : 2024-09-09 DOI: arxiv-2409.05500
Ziyang Jiao, Ce Guo, Wayne Luk
Causal discovery is designed to identify causal relationships in data, a task that has become increasingly complex due to the computational demands of traditional methods such as VarLiNGAM, which combines a Vector Autoregressive Model with a Linear Non-Gaussian Acyclic Model for time series data. This study is dedicated to optimising causal discovery specifically for time series data, which is common in practical applications. Time series causal discovery is particularly challenging due to the need to account for temporal dependencies and potential time lag effects. By designing a specialised dataset generator and reducing the computational complexity of the VarLiNGAM model from \( O(m^3 \cdot n) \) to \( O(m^3 + m^2 \cdot n) \), this study significantly improves the feasibility of processing large datasets. The proposed methods have been validated on advanced computational platforms and tested across simulated, real-world, and large-scale datasets, showcasing enhanced efficiency and performance. The optimised algorithm achieved 7 to 13 times speedup compared with the original algorithm and around 4.5 times speedup compared with the GPU-accelerated version on large-scale datasets with feature sizes between 200 and 400. Our methods aim to push the boundaries of current causal discovery capabilities, making them more robust, scalable, and applicable to real-world scenarios, thus facilitating breakthroughs in various fields such as healthcare and finance.
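VarLiNGAM's overall structure, a VAR fit whose residuals feed a LiNGAM analysis of instantaneous effects, can be sketched with plain least squares. A hedged illustration of the first stage only (toy data; this is not the optimised algorithm the study proposes):

```python
import numpy as np

rng = np.random.default_rng(0)
m, T = 3, 500
A = 0.4 * np.eye(m)                          # assumed lag-1 coefficient matrix
X = np.zeros((T, m))
for t in range(1, T):
    # non-Gaussian (Laplace) noise is what makes the LiNGAM stage identifiable
    X[t] = X[t - 1] @ A.T + rng.laplace(scale=0.5, size=m)

# stage 1: least-squares VAR(1) fit; X[1:] ≈ X[:-1] @ M, so M estimates A.T
M_hat, *_ = np.linalg.lstsq(X[:-1], X[1:], rcond=None)
resid = X[1:] - X[:-1] @ M_hat               # residuals passed to the LiNGAM stage
```

The \( O(m^3 + m^2 \cdot n) \) reduction concerns how such fits and the subsequent pruning are organised over \( m \) features and \( n \) samples; the pipeline shape above is unchanged.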
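The two-stage structure the abstract refers to — fit a vector autoregression, then run LiNGAM on its residuals — can be sketched in NumPy. This is a minimal illustration of the first stage only, with a hypothetical bivariate VAR(1) process and Laplace (non-Gaussian) noise; the LiNGAM step that would estimate instantaneous causal structure from the residuals is omitted:

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulate a bivariate VAR(1) process: x_t = A @ x_{t-1} + e_t
# (toy data; A and the noise scale are hypothetical)
A = np.array([[0.5, 0.2],
              [0.0, 0.4]])
n, m = 2000, 2
X = np.zeros((n, m))
for t in range(1, n):
    X[t] = A @ X[t - 1] + rng.laplace(scale=0.5, size=m)  # non-Gaussian noise

# Stage 1: ordinary least squares estimate of the VAR coefficient matrix
past, present = X[:-1], X[1:]
B, *_ = np.linalg.lstsq(past, present, rcond=None)  # present ~ past @ B
A_hat = B.T

# Stage 2 input: the residuals, on which LiNGAM would then estimate
# the instantaneous (non-lagged) causal structure
residuals = present - past @ B
```

The dominant costs visible even in this sketch are the regression solve (cubic in the number of features m) and the residual computation (linear in the series length n), which is where a complexity reduction from $O(m^3 \cdot n)$ to $O(m^3 + m^2 \cdot n)$ pays off.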
Citations: 0
Best Linear Unbiased Estimate from Privatized Histograms 从私有化直方图得出最佳线性无偏估计值
Pub Date : 2024-09-06 DOI: arxiv-2409.04387
Jordan Awan, Adam Edwards, Paul Bartholomew, Andrew Sillers
In differential privacy (DP) mechanisms, it can be beneficial to release "redundant" outputs, in the sense that a quantity can be estimated by combining different combinations of privatized values. Indeed, this structure is present in the DP 2020 Decennial Census products published by the U.S. Census Bureau. With this structure, the DP output can be improved by enforcing self-consistency (i.e., estimators obtained by combining different values result in the same estimate), and we show that the minimum variance processing is a linear projection. However, standard projection algorithms are too computationally expensive in terms of both memory and execution time for applications such as the Decennial Census. We propose the Scalable Efficient Algorithm for Best Linear Unbiased Estimate (SEA BLUE), based on a two-step process of aggregation and differencing that 1) enforces self-consistency through a linear and unbiased procedure, 2) is computationally and memory efficient, 3) achieves the minimum variance solution under certain structural assumptions, and 4) is empirically shown to be robust to violations of these structural assumptions. We propose three methods of calculating confidence intervals from our estimates, under various assumptions. We apply SEA BLUE to two 2010 Census demonstration products, illustrating its scalability and validity.
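The core idea — redundant privatized releases combined by a minimum-variance linear projection — can be illustrated with a toy two-cell histogram. The counts, noise scale, and equal-variance assumption below are hypothetical, not taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(1)
sigma = 1.0                      # scale of the additive privacy noise (hypothetical)
true = np.array([30.0, 70.0])    # true counts of two histogram cells

# Redundant releases: cell A, cell B, and their total, each noised independently
X = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [1.0, 1.0]])       # maps the true cells to the released quantities

def blue(y):
    # With equal noise variances, the BLUE is the ordinary least-squares projection
    theta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return theta

trials = np.array([blue(X @ true + rng.normal(0, sigma, 3))
                   for _ in range(20000)])
```

With equal variances the projection enforces self-consistency by construction (the two fitted cells sum exactly to the implied total), and each cell's variance drops from $\sigma^2$ for the raw release to $2\sigma^2/3$ after projection — the "improvement from redundancy" the abstract describes, here via a dense solve that SEA BLUE replaces with a scalable aggregation-and-differencing scheme.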
Citations: 0
Conditional logistic individual-level models of spatial infectious disease dynamics 空间传染病动态的条件逻辑个体水平模型
Pub Date : 2024-09-04 DOI: arxiv-2409.02353
Tahmina Akter, Rob Deardon
Here, we introduce a novel framework for modelling the spatiotemporal dynamics of disease spread known as conditional logistic individual-level models (CL-ILMs). This framework alleviates much of the computational burden associated with traditional spatiotemporal individual-level models for epidemics, and facilitates the use of standard software for fitting logistic models when analysing spatiotemporal disease patterns. The models can be fitted in either a frequentist or Bayesian framework. Here, we apply the new spatial CL-ILM to both simulated and semi-real data from the UK 2001 foot-and-mouth disease epidemic.
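The abstract's point that standard logistic-model software can fit spatial epidemic data can be sketched as follows: compute a distance-kernel "infectious pressure" covariate for each susceptible, then fit an ordinary logistic regression. The one-time-step setup, kernel, and coefficients below are hypothetical simplifications, not the paper's CL-ILM specification:

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy snapshot of an epidemic: individuals on a plane, some already infectious
n = 2000
pos = rng.uniform(0, 10, size=(n, 2))
infectious = rng.random(n) < 0.1
sus = ~infectious

# Covariate: distance-kernel infectious pressure felt by each susceptible
d = np.linalg.norm(pos[sus][:, None, :] - pos[infectious][None, :, :], axis=2)
pressure = np.exp(-2.0 * d).sum(axis=1)

# Simulate which susceptibles become infected under a logistic model
beta_true = np.array([-4.0, 1.0])               # hypothetical coefficients
p_true = 1 / (1 + np.exp(-(beta_true[0] + beta_true[1] * pressure)))
y = rng.random(sus.sum()) < p_true

# Fit a plain logistic regression via Newton's method (IRLS) --
# i.e. exactly what off-the-shelf GLM software would do
X = np.column_stack([np.ones(sus.sum()), pressure])
beta = np.zeros(2)
for _ in range(25):
    p = 1 / (1 + np.exp(-X @ beta))
    W = p * (1 - p)
    beta = beta + np.linalg.solve(X.T @ (X * W[:, None]), X.T @ (y - p))
```

Once the spatial structure is folded into the `pressure` covariate, the fit itself is a standard logistic regression, which is what makes the framework compatible with either frequentist GLM routines or Bayesian samplers.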
Citations: 0