
Latest Publications in arXiv: Computation

Sequential quasi-Monte Carlo: Introduction for Non-Experts, Dimension Reduction, Application to Partly Observed Diffusion Processes
Pub Date : 2017-06-16 DOI: 10.1007/978-3-319-91436-7_5
N. Chopin, Mathieu Gerber
Citations: 3
Optimal Experimental Design Using A Consistent Bayesian Approach
Pub Date : 2017-05-25 DOI: 10.1115/1.4037457
Scott N. Walsh, T. Wildey, J. Jakeman
We consider the utilization of a computational model to guide the optimal acquisition of experimental data to inform the stochastic description of model input parameters. Our formulation is based on the recently developed consistent Bayesian approach for solving stochastic inverse problems, which seeks a posterior probability density that is consistent with the model and the data in the sense that the push-forward of the posterior (through the computational model) matches the observed density on the observations almost everywhere. Given a set of potential observations, our optimal experimental design (OED) seeks the observation, or set of observations, that maximizes the expected information gain from the prior probability density on the model parameters. We discuss the characterization of the space of observed densities and a computationally efficient approach for rescaling observed densities to satisfy the fundamental assumptions of the consistent Bayesian approach. Numerical results are presented to compare our approach with existing OED methodologies that use the classical/statistical Bayesian approach and to demonstrate our OED on a set of representative PDE-based models.
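For orientation, the update and the design criterion sketched in this abstract can be written compactly as follows; the notation ($\lambda$ for the model parameters, $Q$ for the parameter-to-observable map) is chosen here for illustration and is not taken from the paper.

```latex
\pi_{\mathrm{post}}(\lambda)
  = \pi_{\mathrm{prior}}(\lambda)\,
    \frac{\pi_{\mathrm{obs}}\!\left(Q(\lambda)\right)}
         {\pi_{Q(\mathrm{prior})}\!\left(Q(\lambda)\right)},
\qquad
\mathrm{IG}
  = \int_{\Lambda} \pi_{\mathrm{post}}(\lambda)\,
    \log\frac{\pi_{\mathrm{post}}(\lambda)}{\pi_{\mathrm{prior}}(\lambda)}\, d\lambda ,
```

where $\pi_{Q(\mathrm{prior})}$ denotes the push-forward of the prior through $Q$. The OED then ranks candidate observations by the expected value of this information gain over the data that could plausibly be observed.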
Citations: 15
Parsimonious Adaptive Rejection Sampling
Pub Date : 2017-05-01 DOI: 10.1049/EL.2017.1711
Luca Martino
Monte Carlo (MC) methods have become very popular in signal processing during the past decades. Adaptive rejection sampling (ARS) algorithms are a well-known MC technique that draws independent samples efficiently from univariate target densities. The ARS schemes yield a sequence of proposal functions that converge toward the target, so that the probability of accepting a sample approaches one. However, sampling from the proposal pdf becomes more computationally demanding each time it is updated. We propose the Parsimonious Adaptive Rejection Sampling (PARS) method, which achieves an efficient trade-off between acceptance rate and proposal complexity. As a result, the algorithm is faster than the standard ARS approach.
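To make the acceptance-rate versus proposal-complexity trade-off concrete, below is a toy adaptive rejection sampler for a log-concave target in which new support points are added only while the proposal stays under a fixed size cap. The cap is a simplification standing in for the parsimonious update rule of PARS, not the actual criterion from the paper, and all names in the sketch are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)


def log_f(x):
    """Unnormalised log-density of the toy target (standard normal)."""
    return -0.5 * x * x


def dlog_f(x):
    return -x


def envelope(pts):
    """Piecewise-linear upper hull of log_f from tangents at the support points."""
    pts = np.sort(pts)
    h, dh = log_f(pts), dlog_f(pts)
    # Breakpoints where consecutive tangent lines intersect.
    z = (h[1:] - h[:-1] + dh[:-1] * pts[:-1] - dh[1:] * pts[1:]) / (dh[:-1] - dh[1:])
    edges = np.concatenate(([-np.inf], z, [np.inf]))
    return edges, pts, h, dh


def sample_envelope(edges, pts, h, dh):
    """Draw one point from the density proportional to exp(upper hull)."""
    lo, hi = edges[:-1], edges[1:]
    e_lo = np.exp(dh * (lo - pts))   # equals 0 on the leftmost segment (slope > 0)
    e_hi = np.exp(dh * (hi - pts))   # equals 0 on the rightmost segment (slope < 0)
    mass = np.exp(h) * (e_hi - e_lo) / dh            # integral of each segment
    j = rng.choice(len(mass), p=mass / mass.sum())   # pick a segment
    x = pts[j] + np.log(e_lo[j] + rng.uniform() * (e_hi[j] - e_lo[j])) / dh[j]
    return x, h[j] + dh[j] * (x - pts[j])            # the sample and its hull value


def pars_like_sampler(n_samples, max_support=6):
    support = [-2.0, 2.0]   # initial tangents with positive and negative slope
    samples = []
    while len(samples) < n_samples:
        x, hull = sample_envelope(*envelope(np.array(support)))
        if np.log(rng.uniform()) <= log_f(x) - hull:
            samples.append(x)
        elif len(support) < max_support:
            support.append(x)   # refine the hull only up to the complexity cap
    return np.array(samples)


print(pars_like_sampler(5000).std())   # close to 1 for the standard normal target
```

With the cap in place, the envelope stops growing after a few updates, so the per-sample cost stays bounded at the price of a somewhat lower acceptance rate, which is the trade-off the abstract describes.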
Citations: 10
Estimating Spatial Econometrics Models with Integrated Nested Laplace Approximation
Pub Date : 2017-03-03 DOI: 10.3390/MATH9172044
V. Gómez‐Rubio, R. Bivand, H. Rue
Integrated Nested Laplace Approximation (INLA) provides a fast and effective method for marginal inference on Bayesian hierarchical models. This methodology has been implemented in the R-INLA package, which permits INLA to be used from within the R statistical software. Although INLA is implemented as a general methodology, its use in practice is limited to the models implemented in the R-INLA package. Spatial autoregressive models are widely used in spatial econometrics but have until now been missing from the R-INLA package. In this paper, we describe the implementation and application of a new class of latent models in INLA made available through R-INLA. This new latent class implements a standard spatial lag model, which is widely used and can serve as a building block for more complex models in spatial econometrics. The implementation of this latent model in R-INLA also means that all the other features of INLA can be used for model fitting, model selection and inference in spatial econometrics, as will be shown in this paper. Finally, we illustrate the use of this new latent model and its applications with two datasets based on Gaussian and binary outcomes.
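For readers coming from outside spatial econometrics, the spatial lag model referred to above is usually written as follows (generic notation, not quoted from the paper):

```latex
y = \rho W y + X\beta + \varepsilon,
\qquad \varepsilon \sim \mathcal{N}\!\left(0, \sigma^2 I\right)
\qquad\Longleftrightarrow\qquad
y = (I - \rho W)^{-1}\left(X\beta + \varepsilon\right),
```

where $W$ is a known spatial weights matrix (typically row-standardised), $\rho$ is the spatial autocorrelation parameter and $X\beta$ collects the usual fixed effects. In the INLA framework this right-hand side can be handled as a single latent Gaussian component, which is the role of the new latent class described in the abstract.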
Citations: 26
Tutorial in Joint Modeling and Prediction: a Statistical Software for Correlated Longitudinal Outcomes, Recurrent Events and a Terminal Event
Pub Date : 2017-01-13 DOI: 10.18637/jss.v081.i03
Agnieszka Kr'ol, A. Mauguen, Yassin Mazroui, Alexandre Laurent, S. Michiels, V. Rondeau
Extensions in the field of joint modeling of correlated data and dynamic predictions improve the development of prognosis research. The R package frailtypack provides estimation of various joint models for longitudinal data and survival events. In particular, it fits models for recurrent events and a terminal event (frailtyPenal), models for two survival outcomes for clustered data (frailtyPenal), models for two types of recurrent events and a terminal event (multivPenal), models for a longitudinal biomarker and a terminal event (longiPenal), and models for a longitudinal biomarker, recurrent events and a terminal event (trivPenal). The estimators are obtained using standard and penalized maximum likelihood approaches, and each model function allows goodness-of-fit analyses and plots of the baseline hazard functions. Finally, the package provides individual dynamic predictions of the terminal event and evaluation of predictive accuracy. This paper presents the theoretical models and estimation techniques, applies the methods for prediction, and illustrates the details of the frailtypack functions with examples.
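To fix ideas, the joint frailty model for recurrent events and a terminal event that frailtyPenal targets is commonly written in the following form; the exact parameterisation used in frailtypack may differ in detail, so read this as a generic statement rather than the package's definition.

```latex
r_{ij}(t \mid w_i) = w_i\, r_0(t)\, \exp\!\left(x_{ij}^{\top}\beta_1\right),
\qquad
\lambda_i(t \mid w_i) = w_i^{\alpha}\, \lambda_0(t)\, \exp\!\left(x_i^{\top}\beta_2\right),
\qquad
w_i \sim \Gamma\!\left(\tfrac{1}{\theta}, \tfrac{1}{\theta}\right),
```

where $r_{ij}$ is the hazard of the $j$-th recurrent event and $\lambda_i$ the hazard of the terminal event for subject $i$, the shared frailty $w_i$ links the two processes, $\alpha$ measures the strength of the association, and the baseline hazards $r_0$ and $\lambda_0$ are the quantities estimated with the penalised likelihood mentioned above.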
Citations: 50
Calculating probabilistic excursion sets and related quantities using excursions
Pub Date : 2016-12-13 DOI: 10.18637/JSS.V086.I05
D. Bolin, F. Lindgren
The R software package excursions contains methods for calculating probabilistic excursion sets, contour credible regions, and simultaneous confidence bands for latent Gaussian stochastic processes and fields. It also contains methods for uncertainty quantification of contour maps and computation of Gaussian integrals. This article describes the theoretical and computational methods used in the package. The main functions of the package are introduced and two examples illustrate how the package can be used.
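For concreteness, the positive excursion set computed by the package can be stated as follows, in the spirit of the Bolin and Lindgren formulation; the notation here is illustrative rather than quoted from the article.

```latex
A_u^{+}(f) = \{\, s \in \Omega : f(s) > u \,\},
\qquad
E_{u,\alpha}^{+}(f) = \operatorname*{arg\,max}_{D}
\left\{ |D| \; : \; \mathbb{P}\!\left(D \subseteq A_u^{+}(f)\right) \ge 1 - \alpha \right\},
```

that is, the largest set that is contained in the true level-$u$ excursion region of the latent field $f$ with posterior probability at least $1-\alpha$; contour credible regions and simultaneous confidence bands are defined analogously.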
Citations: 29
ADDT: An R Package for Analysis of Accelerated Destructive Degradation Test Data
Pub Date : 2016-11-22 DOI: 10.1007/978-981-10-5194-4_14
Zhongnan Jin, Yimeng Xie, Yili Hong, J. V. Mullekom
Citations: 5
Local Kernel Dimension Reduction in Approximate Bayesian Computation
Pub Date : 2016-09-05 DOI: 10.4236/OJS.2018.83031
Jin Zhou, K. Fukumizu
Approximate Bayesian Computation (ABC) is a popular sampling method in applications involving intractable likelihood functions. Without evaluating the likelihood function, ABC approximates the posterior distribution by the set of accepted samples, which are simulated with parameters drawn from the prior distribution, where acceptance is determined by the distance between the summary statistics of the sample and the observation. The sufficiency and dimensionality of the summary statistics play a central role in the application of ABC. This paper proposes Local Gradient Kernel Dimension Reduction (LGKDR) to construct low-dimensional summary statistics for ABC. The proposed method identifies a sufficient subspace of the original summary statistics by implicitly considering all nonlinear transforms therein, and a weighting kernel is used for the concentration of the projections. No strong assumptions are made on the marginal distributions or the regression model, permitting usage in a wide range of applications. Experiments are carried out with both simple rejection ABC and sequential Monte Carlo ABC methods. Results are competitive in the former case and substantially better in the latter, where Monte Carlo errors are compressed as much as possible.
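As background, here is a minimal rejection-ABC sketch in which a linear projection `B` stands in for the low-dimensional summary that LGKDR would learn; the real method estimates this projection with a kernel-based gradient procedure that is not reproduced here, and every name in the sketch (the toy simulator, the prior, `B`) is an assumption made for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)


def simulator(theta, n=50):
    """Toy model: i.i.d. normal data with unknown mean and log-standard-deviation."""
    mu, log_sigma = theta
    return rng.normal(mu, np.exp(log_sigma), size=n)


def summaries(x):
    """Raw, partly redundant summary statistics."""
    return np.array([x.mean(), x.std(), np.median(x), x.min(), x.max()])


def prior():
    """Uniform prior over (mu, log_sigma); purely illustrative."""
    return np.array([rng.uniform(-5, 5), rng.uniform(-2, 2)])


def rejection_abc(s_obs, B, n_draws=20000, quantile=0.01):
    """Keep the prior draws whose projected summaries are closest to the data."""
    thetas = np.array([prior() for _ in range(n_draws)])
    sims = np.array([summaries(simulator(t)) for t in thetas])
    # The distance is computed in the reduced space B @ s, not on the raw summaries.
    dist = np.linalg.norm((sims - s_obs) @ B.T, axis=1)
    return thetas[dist <= np.quantile(dist, quantile)]


# Observed data generated from "true" parameters mu = 1, sigma = 2.
x_obs = rng.normal(1.0, 2.0, size=50)

# Hypothetical 2 x 5 projection; LGKDR would estimate this from pilot simulations.
B = np.array([[1.0, 0.0, 0.0, 0.0, 0.0],
              [0.0, 1.0, 0.0, 0.0, 0.0]])

posterior_draws = rejection_abc(summaries(x_obs), B)
print(posterior_draws.mean(axis=0))   # rough posterior mean of (mu, log_sigma)
```

Swapping the hand-picked `B` for a learned projection is where the dimension-reduction step of the paper plugs in.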
Citations: 4
Fast Simulation of Hyperplane-Truncated Multivariate Normal Distributions
Pub Date : 2016-07-16 DOI: 10.1214/17-BA1052
Yulai Cong, Bo Chen, Mingyuan Zhou
We introduce a fast and easy-to-implement simulation algorithm for a multivariate normal distribution truncated on the intersection of a set of hyperplanes, and further generalize it to efficiently simulate random variables from a multivariate normal distribution whose covariance (precision) matrix can be decomposed as a positive-definite matrix minus (plus) a low-rank symmetric matrix. Example results illustrate the correctness and efficiency of the proposed simulation algorithms.
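The building block behind such samplers is the standard Gaussian conditioning identity: draw an unconstrained sample and project it onto the hyperplanes. The sketch below shows that step for the constraint $Gx = r$; it follows the spirit of the abstract, but the function name, the toy example and the implementation choices are assumptions made here, not the paper's.

```python
import numpy as np

rng = np.random.default_rng(0)


def sample_mvn_on_hyperplanes(mu, sigma, G, r, size=1):
    """Draw from N(mu, sigma) conditioned on the hyperplane constraints G x = r.

    Sample y ~ N(mu, sigma) without constraints, then apply the correction
        x = y + sigma G^T (G sigma G^T)^{-1} (r - G y),
    which is an exact draw from the conditional distribution given G x = r.
    """
    y = rng.multivariate_normal(mu, sigma, size=size)        # shape (size, d)
    K = sigma @ G.T @ np.linalg.inv(G @ sigma @ G.T)          # shape (d, m)
    return y + (r - y @ G.T) @ K.T                            # shape (size, d)


# Small check: a 3-d equicorrelated Gaussian restricted to the plane x1 + x2 + x3 = 1.
d = 3
mu = np.zeros(d)
sigma = 0.5 * np.eye(d) + 0.5
G = np.ones((1, d))
r = np.array([1.0])

x = sample_mvn_on_hyperplanes(mu, sigma, G, r, size=10000)
print(np.abs(x.sum(axis=1) - 1.0).max())   # ~1e-15: the constraint holds exactly
print(x.mean(axis=0))                      # close to the conditional mean (1/3, 1/3, 1/3)
```

The generalisation mentioned in the abstract (covariance or precision expressed as a positive-definite matrix plus or minus a low-rank term) is about performing this correction without forming large dense matrices; that refinement is not attempted here.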
Citations: 36
Robust and scalable Bayesian analysis of spatial neural tuning function data
Pub Date : 2016-06-24 DOI: 10.1214/16-AOAS996
Kamiar Rahnama Rad, Timothy A. Machado, L. Paninski
A common analytical problem in neuroscience is the interpretation of neural activity with respect to sensory input or behavioral output. This is typically achieved by regressing measured neural activity against known stimuli or behavioral variables to produce a "tuning function" for each neuron. Unfortunately, because this approach handles neurons individually, it cannot take advantage of simultaneous measurements from spatially adjacent neurons that often have similar tuning properties. On the other hand, sharing information between adjacent neurons can errantly degrade estimates of tuning functions across space if there are sharp discontinuities in tuning between nearby neurons. In this paper, we develop a computationally efficient block Gibbs sampler that effectively pools information between neurons to de-noise tuning function estimates while simultaneously preserving sharp discontinuities that might exist in the organization of tuning across space. This method is fully Bayesian and its computational cost per iteration scales sub-quadratically with total parameter dimensionality. We demonstrate the robustness and scalability of this approach by applying it to both real and synthetic datasets. In particular, an application to data from the spinal cord illustrates that the proposed methods can dramatically decrease the experimental time required to accurately estimate tuning functions.
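The workhorse behind such a sampler is that, with a Gaussian observation model and a Gaussian Markov random field prior tying neighbouring neurons together, every block conditional is Gaussian and available in closed form. The display below is a generic version of that update, not the paper's exact model (which additionally accommodates the sharp discontinuities discussed above); the notation is chosen here for illustration.

```latex
y_n \mid \beta_n \sim \mathcal{N}\!\left(X_n \beta_n, \sigma^2 I\right),
\qquad
\beta = (\beta_1, \dots, \beta_N) \sim \mathcal{N}\!\left(0, Q^{-1}\right),
```

so that each sweep of the block Gibbs sampler draws

```latex
\beta_n \mid \beta_{-n}, y \;\sim\; \mathcal{N}\!\left(m_n, V_n\right),
\qquad
V_n = \left(\sigma^{-2} X_n^{\top} X_n + Q_{nn}\right)^{-1},
\qquad
m_n = V_n\!\left(\sigma^{-2} X_n^{\top} y_n - Q_{n,-n}\,\beta_{-n}\right).
```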
Citations: 4