
Neurons, behavior, data analysis and theory: Latest Publications

Mixed-horizon optimal feedback control as a model of human movement
Pub Date : 2021-04-13 DOI: 10.51628/001c.29674
Justinas Česonis, D. W. Franklin
Computational optimal feedback control (OFC) models in the sensorimotor control literature span a vast range of different implementations. Among the popular algorithms, finite-horizon, receding-horizon or infinite-horizon linear-quadratic regulators (LQR) have been broadly used to model human reaching movements. While these different implementations have their unique merits, all three have limitations in simulating the temporal evolution of visuomotor feedback responses. Here we propose a novel approach, a mixed-horizon OFC, combining the strengths of the traditional finite-horizon and infinite-horizon controllers to address their individual limitations. Specifically, we use the infinite-horizon OFC to generate durations of the movements, which are then fed into the finite-horizon controller to generate control gains. We then demonstrate the stability of our model by performing extensive sensitivity analysis of both re-optimisation and different cost functions. Finally, we use our model to provide a fresh look at previously published studies by reinforcing the previous results [1], providing alternative explanations to previous studies [2], or generating new predictive results for prior experiments [3].
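The finite-horizon stage of such a controller is typically computed by a backward Riccati recursion over a fixed movement duration. As a minimal, hedged sketch (not the authors' implementation), the following simulates a reach with a hypothetical discretised point-mass plant; all parameter values here are illustrative assumptions:

```python
import numpy as np

def finite_horizon_lqr(A, B, Q, R, Qf, N):
    """Backward Riccati recursion: time-varying feedback gains K_0..K_{N-1}
    for x_{t+1} = A x_t + B u_t with cost sum_t (x'Qx + u'Ru) + x_N' Qf x_N."""
    P = Qf
    gains = []
    for _ in range(N):
        K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)
        P = Q + A.T @ P @ (A - B @ K)
        gains.append(K)
    return gains[::-1]  # reorder so gains[t] applies at forward time t

# Hypothetical 1-D point mass (position, velocity), Euler-discretised
dt = 0.01                               # assumed time step (s)
A = np.array([[1.0, dt], [0.0, 1.0]])
B = np.array([[0.0], [dt]])             # control acts as acceleration
Q = np.zeros((2, 2))                    # no running state cost
R = np.array([[1e-4]])                  # small effort cost
Qf = np.diag([1.0, 0.1])                # terminal accuracy cost
K = finite_horizon_lqr(A, B, Q, R, Qf, N=50)

# Simulate a 0.5 s reach from x0 = [-0.1 m, 0 m/s] toward the origin
x = np.array([[-0.1], [0.0]])
for t in range(50):
    u = -K[t] @ x
    x = A @ x + B @ u
```

In the mixed-horizon scheme described above, the horizon N fed into this recursion would itself come from an infinite-horizon controller rather than being fixed by hand.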
Citations: 6
Deep Recurrent Encoder: an end-to-end network to model magnetoencephalography at scale
Pub Date : 2021-03-03 DOI: 10.51628/001c.38668
O. Chehab, Alexandre Défossez, Jean-Christophe Loiseau, Alexandre Gramfort, J. King
Understanding how the brain responds to sensory inputs from non-invasive brain recordings like magnetoencephalography (MEG) can be particularly challenging: (i) the high-dimensional dynamics of mass neuronal activity are notoriously difficult to model, (ii) signals can greatly vary across subjects and trials and (iii) the relationship between these brain responses and the stimulus features is non-trivial. These challenges have led the community to develop a variety of preprocessing and analytical (almost exclusively linear) methods, each designed to tackle one of these issues. Instead, we propose to address these challenges through a specific end-to-end deep learning architecture, trained to predict the MEG responses of multiple subjects at once. We successfully test this approach on a large cohort of MEG recordings acquired during a one-hour reading task. Our Deep Recurrent Encoder (DRE) reliably predicts MEG responses to words with a three-fold improvement over classic linear methods. We further describe a simple variable importance analysis to investigate the MEG representations learnt by our model and recover the expected evoked responses to word length and word frequency. Last, we show that, contrary to linear encoders, our model captures modulations of the brain response in relation to baseline fluctuations in the alpha frequency band. The quantitative improvement of the present deep learning approach paves the way to a better characterization of the complex dynamics of brain activity from large MEG datasets.
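The core architectural idea, a single shared encoder conditioned on the identity of each subject, can be illustrated with a toy recurrent forward pass. This is a hedged, untrained numpy sketch; the dimensions, weight names, and subject-embedding scheme below are illustrative assumptions, not the published DRE architecture:

```python
import numpy as np

rng = np.random.default_rng(0)
n_subj, d_stim, d_hid, d_meg, T = 4, 8, 16, 32, 50

# Hypothetical parameters: one shared recurrent encoder plus a per-subject
# embedding, so one network predicts responses for all subjects at once
W_in = rng.standard_normal((d_hid, d_stim)) * 0.1
W_sub = rng.standard_normal((d_hid, n_subj)) * 0.1   # subject embedding
W_rec = rng.standard_normal((d_hid, d_hid)) * 0.1
W_out = rng.standard_normal((d_meg, d_hid)) * 0.1

def predict_meg(stimulus, subject_id):
    """Forward pass: recurrently encode the stimulus, conditioned on the
    subject, and read out predicted MEG sensor activity at each time step."""
    subj = np.eye(n_subj)[subject_id]    # one-hot subject code
    h = np.zeros(d_hid)
    out = []
    for x_t in stimulus:
        h = np.tanh(W_in @ x_t + W_rec @ h + W_sub @ subj)
        out.append(W_out @ h)
    return np.array(out)

stim = rng.standard_normal((T, d_stim))
pred_s0 = predict_meg(stim, 0)   # same stimulus, two different subjects
pred_s1 = predict_meg(stim, 1)
```

The same stimulus yields different predicted sensor time courses per subject, which is the mechanism that lets one model absorb between-subject variability.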
Citations: 6
Statistical analysis of periodic data in neuroscience
Pub Date : 2021-01-12 DOI: 10.51628/001c.27680
D. Baker
Many experimental paradigms in neuroscience involve driving the nervous system with periodic sensory stimuli. Neural signals recorded using a variety of techniques will then include phase-locked oscillations at the stimulation frequency. The analysis of such data often involves standard univariate statistics such as T-tests, conducted on the Fourier amplitude components (ignoring phase), either to test for the presence of a signal, or to compare signals across different conditions. However, the assumptions of these tests will sometimes be violated because amplitudes are not normally distributed, and furthermore weak signals might be missed if the phase information is discarded. An alternative approach is to conduct multivariate statistical tests using the real and imaginary Fourier components. Here the performance of two multivariate extensions of the T-test are compared: Hotelling's $T^2$ and a variant called $T^2_{circ}$. A novel test of the assumptions of $T^2_{circ}$ is developed, based on the condition index of the data (the square root of the ratio of eigenvalues of a bounding ellipse), and a heuristic for excluding outliers using the Mahalanobis distance is proposed. The $T^2_{circ}$ statistic is then extended to multi-level designs, resulting in a new statistical test termed $ANOVA^2_{circ}$. This has identical assumptions to $T^2_{circ}$, and is shown to be more sensitive than MANOVA when these assumptions are met. The use of these tests is demonstrated for two publicly available empirical data sets, and practical guidance is suggested for choosing which test to run. Implementations of these novel tools are provided as an R package and a Matlab toolbox, in the hope that their wider adoption will improve the sensitivity of statistical inferences involving periodic data.
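The $T^2_{circ}$ statistic can be computed directly from the complex Fourier coefficients. The sketch below follows the standard Victor-and-Mastronarde form, in which $F = n \cdot T^2_{circ}$ is referred to an F distribution on 2 and $2n-2$ degrees of freedom; the simulated amplitudes are illustrative assumptions, and this is not the paper's R/Matlab implementation:

```python
import numpy as np
from scipy import stats

def t2circ(z):
    """T^2_circ for n complex Fourier components z: tests whether the mean
    of z differs from 0, using both real and imaginary parts and assuming
    equal variance in the two (circular symmetry)."""
    z = np.asarray(z, dtype=complex)
    n = z.size
    zbar = z.mean()
    resid = np.sum(np.abs(z - zbar) ** 2)
    T2 = (n - 1) * np.abs(zbar) ** 2 / resid
    F = n * T2                       # ~ F(2, 2n-2) under the null
    p = stats.f.sf(F, 2, 2 * n - 2)
    return T2, p

rng = np.random.default_rng(0)
# Signal present: complex amplitudes clustered around 1 + 0.5j
signal = 1 + 0.5j + 0.3 * (rng.standard_normal(20) + 1j * rng.standard_normal(20))
# No signal: zero-mean complex noise
noise = 0.3 * (rng.standard_normal(20) + 1j * rng.standard_normal(20))

T2_sig, p_sig = t2circ(signal)
T2_noise, p_noise = t2circ(noise)
```

Because phase information is retained, a consistent complex mean is detected even when the raw amplitudes alone would look unremarkable.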
Citations: 5
Predicting Goal-directed Attention Control Using Inverse-Reinforcement Learning.
Pub Date : 2021-01-01 Epub Date: 2021-04-20 DOI: 10.51628/001c.22322
Gregory J Zelinsky, Yupei Chen, Seoyoung Ahn, Hossein Adeli, Zhibo Yang, Lihan Huang, Dimitrios Samaras, Minh Hoai

Understanding how goals control behavior is a question ripe for interrogation by new methods from machine learning. These methods require large and labeled datasets to train models. To annotate a large-scale image dataset with observed search fixations, we collected 16,184 fixations from people searching for either microwaves or clocks in a dataset of 4,366 images (MS-COCO). We then used this behaviorally-annotated dataset and the machine learning method of inverse-reinforcement learning (IRL) to learn target-specific reward functions and policies for these two target goals. Finally, we used these learned policies to predict the fixations of 60 new behavioral searchers (clock = 30, microwave = 30) in a disjoint test dataset of kitchen scenes depicting both a microwave and a clock (thus controlling for differences in low-level image contrast). We found that the IRL model predicted behavioral search efficiency and fixation-density maps using multiple metrics. Moreover, reward maps from the IRL model revealed target-specific patterns that suggest, not just attention guidance by target features, but also guidance by scene context (e.g., fixations along walls in the search of clocks). Using machine learning and the psychologically meaningful principle of reward, it is possible to learn the visual features used in goal-directed attention control.
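One of the comparisons mentioned above, scoring a model against fixation-density maps, can be sketched in a few lines. The map construction (a 2-D histogram with Gaussian smoothing) and the Pearson-correlation metric are common choices, but the parameters and simulated fixations here are assumptions, not the study's pipeline:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def fixation_density_map(fixations, shape, sigma=10):
    """Fixation-density map: 2-D histogram of (row, col) fixation coordinates,
    Gaussian-smoothed and normalised to sum to 1."""
    m = np.zeros(shape)
    for r, c in fixations:
        r = int(np.clip(r, 0, shape[0] - 1))
        c = int(np.clip(c, 0, shape[1] - 1))
        m[r, c] += 1
    m = gaussian_filter(m, sigma)
    return m / m.sum()

def map_correlation(a, b):
    """Pearson correlation between two maps, one common comparison metric."""
    return np.corrcoef(a.ravel(), b.ravel())[0, 1]

rng = np.random.default_rng(1)
shape = (120, 160)   # assumed image size in pixels
# Hypothetical observed and model-predicted fixations clustered near (60, 80),
# plus an unrelated uniform scatter for contrast
obs = rng.normal([60.0, 80.0], 8.0, size=(50, 2))
pred = rng.normal([60.0, 80.0], 8.0, size=(50, 2))
unrelated = rng.uniform([0, 0], [119, 159], size=(50, 2))

m_obs = fixation_density_map(obs, shape)
m_pred = fixation_density_map(pred, shape)
m_unrel = fixation_density_map(unrelated, shape)
```

A model whose predicted fixations land where observers actually look scores a higher map correlation than an unrelated scatter, which is the sense in which the IRL policies above are evaluated.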

Citations: 0
Strong and weak principles of neural dimension reduction
Pub Date : 2020-11-16 DOI: 10.51628/001c.24619
M. Humphries
If spikes are the medium, what is the message? Answering that question is driving the development of large-scale, single-neuron-resolution recordings from behaving animals, on the scale of thousands of neurons. But these data are inherently high-dimensional, with as many dimensions as neurons - so how do we make sense of them? For many the answer is to reduce the number of dimensions. Here I argue we can distinguish weak and strong principles of neural dimension reduction. The weak principle is that dimension reduction is a convenient tool for making sense of complex neural data. The strong principle is that dimension reduction shows us how neural circuits actually operate and compute. Elucidating these principles is crucial, because which one we subscribe to provides radically different interpretations of the same neural activity data. I show how we could make either the weak or the strong principle appear to be true based on innocuous-looking decisions about how we use dimension reduction on our data. To counteract these confounds, I outline the experimental evidence for the strong principle that does not come from dimension reduction; but I also show there are a number of neural phenomena that the strong principle fails to address. To reconcile these conflicting data, I suggest that the brain has both principles at play.
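The weak principle, dimension reduction as a descriptive tool, is easy to demonstrate: when a few shared latent signals drive many neurons, most of the variance concentrates in a few components. A minimal simulated example (population size, latent count, and noise level are all assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)
T, n_neurons, n_latent = 500, 80, 3

# Hypothetical population: 80 neurons driven by 3 shared latent signals
# through random loadings, plus independent per-neuron noise
latents = rng.standard_normal((T, n_latent))
loading = rng.standard_normal((n_latent, n_neurons))
X = latents @ loading + 0.5 * rng.standard_normal((T, n_neurons))

# PCA via SVD of the mean-centred data matrix
Xc = X - X.mean(axis=0)
_, s, _ = np.linalg.svd(Xc, full_matrices=False)
var_explained = s**2 / np.sum(s**2)
```

The first three components soak up nearly all the variance, so the 80-dimensional recording is well *described* in three dimensions; whether the circuit actually *computes* in those dimensions is exactly the strong-principle question the article separates out.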
Citations: 22
Comparing representational geometries using whitened unbiased-distance-matrix similarity
Pub Date : 2020-07-06 DOI: 10.51628/001c.27664
J. Diedrichsen, Eva Berlot, Marieke Mur, Heiko H. Schütt, Mahdiyar Shahbazi, N. Kriegeskorte
Representational similarity analysis (RSA) tests models of brain computation by investigating how neural activity patterns reflect experimental conditions. Instead of predicting activity patterns directly, the models predict the geometry of the representation, as defined by the representational dissimilarity matrix (RDM), which captures to what extent experimental conditions are associated with similar or dissimilar activity patterns. RSA therefore first quantifies the representational geometry by calculating a dissimilarity measure for each pair of conditions, and then compares the estimated representational dissimilarities to those predicted by each model. Here we address two central challenges of RSA: First, dissimilarity measures such as the Euclidean, Mahalanobis, and correlation distance, are biased by measurement noise, which can lead to incorrect inferences. Unbiased dissimilarity estimates can be obtained by crossvalidation, at the price of increased variance. Second, the pairwise dissimilarity estimates are not statistically independent, and ignoring this dependency makes model comparison statistically suboptimal. We present an analytical expression for the mean and (co)variance of both biased and unbiased estimators of the squared Euclidean and Mahalanobis distance, allowing us to quantify the bias-variance trade-off. We also use the analytical expression of the covariance of the dissimilarity estimates to whiten the RDM estimation errors. This results in a new criterion for RDM similarity, the whitened unbiased RDM cosine similarity (WUC), which allows for near-optimal model selection combined with robustness to correlated measurement noise.
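The bias described above, and the cross-validated fix, can be seen in a few lines: multiplying pattern differences from two *independent* data partitions gives an estimator whose noise cross-terms average to zero. A minimal sketch with simulated patterns (the partition scheme and scales are illustrative assumptions, not the paper's estimator code):

```python
import numpy as np

def crossval_sq_dist(xa, ya, xb, yb):
    """Cross-validated squared Euclidean distance between conditions x and y,
    computed from two independent data partitions a and b. The noise in the
    two partitions is independent, so the noise product has zero expectation
    and the bias of the naive squared distance cancels."""
    return (xa - ya) @ (xb - yb)

rng = np.random.default_rng(0)
p = 200          # number of measurement channels (assumed)
sigma = 1.0      # per-channel noise s.d. (assumed)

naive, crossval = [], []
for _ in range(1000):
    mu = rng.standard_normal(p)   # identical true pattern: true distance is 0
    xa = mu + sigma * rng.standard_normal(p)
    xb = mu + sigma * rng.standard_normal(p)
    ya = mu + sigma * rng.standard_normal(p)
    yb = mu + sigma * rng.standard_normal(p)
    naive.append(np.sum((xa - ya) ** 2) / p)
    crossval.append(crossval_sq_dist(xa, ya, xb, yb) / p)
```

The naive per-channel estimate hovers around 2*sigma^2 even though the true distance is zero, while the cross-validated estimate is unbiased, at the cost of higher variance, which is the trade-off the paper quantifies analytically.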
Citations: 24
Overcoming the Weight Transport Problem via Spike-Timing-Dependent Weight Inference
Pub Date : 2020-03-09 DOI: 10.51628/001c.27423
Nasir Ahmad, L. Ambrogioni, M. Gerven
We propose a solution to the weight transport problem, which questions the biological plausibility of the backpropagation algorithm. We derive our method from a theoretical analysis of the (approximate) dynamics of leaky integrate-and-fire neurons. We show that the use of spike timing alone outcompetes existing biologically plausible methods for synaptic weight inference in spiking neural network models. Furthermore, our proposed method is more flexible, being applicable to any spiking neuron model, is conservative in how many parameters are required for implementation, and can be deployed in an online fashion with minimal computational overhead. These features, together with its biological plausibility, make it an attractive mechanism underlying weight inference at single synapses.
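The abstract's starting point, leaky integrate-and-fire dynamics, can be made concrete with a minimal Euler-discretised simulation. This illustrates only the neuron model, not the authors' weight-inference algorithm, and all constants are assumptions:

```python
import numpy as np

def lif_simulate(I, dt=1e-4, tau=0.02, v_th=1.0, v_reset=0.0):
    """Leaky integrate-and-fire: tau dV/dt = -V + I(t); emit a spike and
    reset the membrane potential whenever V crosses threshold."""
    v = v_reset
    spike_times = []
    for t, i_t in enumerate(I):
        v += (dt / tau) * (-v + i_t)
        if v >= v_th:
            spike_times.append(t * dt)
            v = v_reset
    return np.array(spike_times)

# Suprathreshold drive -> regular spiking; subthreshold drive -> none,
# because the leak pulls V toward I = 0.8, which is below threshold
spikes_supra = lif_simulate(np.full(10000, 1.5))   # 1 s of constant input
spikes_sub = lif_simulate(np.full(10000, 0.8))
```

The weight-inference scheme proposed in the paper operates on the timing of spikes such as these, rather than on any explicit copy of downstream weights.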
Citations: 1
Application of the hierarchical bootstrap to multi-level data in neuroscience.
Pub Date : 2020-01-01 Epub Date: 2020-07-21
Varun Saravanan, Gordon J Berman, Samuel J Sober

A common feature in many neuroscience datasets is the presence of hierarchical data structures, most commonly recording the activity of multiple neurons in multiple animals across multiple trials. Accordingly, the measurements constituting the dataset are not independent, even though the traditional statistical analyses often applied in such cases (e.g., Student's t-test) treat them as such. The hierarchical bootstrap has been shown to be an effective tool to accurately analyze such data and while it has been used extensively in the statistical literature, its use is not widespread in neuroscience - despite the ubiquity of hierarchical datasets. In this paper, we illustrate the intuitiveness and utility of this approach to analyze hierarchically nested datasets. We use simulated neural data to show that traditional statistical tests can result in a false positive rate of over 45%, even if the Type-I error rate is set at 5%. While summarizing data across non-independent points (or lower levels) can potentially fix this problem, this approach greatly reduces the statistical power of the analysis. The hierarchical bootstrap, when applied sequentially over the levels of the hierarchical structure, keeps the Type-I error rate within the intended bound and retains more statistical power than summarizing methods. We conclude by demonstrating the effectiveness of the method in two real-world examples, first analyzing singing data in male Bengalese finches (Lonchura striata var. domestica) and second quantifying changes in behavior under optogenetic control in flies (Drosophila melanogaster).
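The resampling scheme itself is simple to sketch: draw with replacement at the top level (animals), then, within each drawn animal, resample its trials. The following is a minimal two-level illustration on simulated data; group sizes, noise levels, and the equal weighting of animals are assumptions, not the paper's exact procedure:

```python
import numpy as np

def hierarchical_bootstrap_mean(data, n_boot=2000, rng=None):
    """Hierarchical bootstrap of a grand mean: resample with replacement at
    every level of the hierarchy (here: animals, then trials within each
    resampled animal). `data` is a list of per-animal 1-D trial arrays."""
    rng = np.random.default_rng(rng)
    n_animals = len(data)
    boot_means = np.empty(n_boot)
    for b in range(n_boot):
        animals = rng.integers(n_animals, size=n_animals)
        vals = []
        for a in animals:
            trials = data[a]
            vals.append(trials[rng.integers(len(trials), size=len(trials))])
        boot_means[b] = np.concatenate(vals).mean()
    return boot_means

rng = np.random.default_rng(0)
# Hypothetical dataset: 6 animals, each with its own offset (between-animal
# variance) and 40 trials of within-animal noise around a true mean of 1.0
data = [1.0 + rng.normal(0, 0.5) + rng.normal(0, 0.2, size=40) for _ in range(6)]
boots = hierarchical_bootstrap_mean(data, n_boot=2000, rng=1)
ci = np.percentile(boots, [2.5, 97.5])
```

Because animals are resampled as units, the width of the resulting interval reflects the between-animal variance, which a trial-level test would wrongly ignore.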

Application of the hierarchical bootstrap to multi-level data in neuroscience.
Varun Saravanan, Gordon J Berman, Samuel J Sober
Pub Date : 2020-01-01
Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7906290/pdf/nihms-1630846.pdf
Citations: 0
Sensitivity and specificity of a Bayesian single trial analysis for time varying neural signals.
Jeff T Mohl, Valeria C Caruso, Surya T Tokdar, Jennifer M Groh

We recently reported the existence of fluctuations in neural signals that may permit neurons to code multiple simultaneous stimuli sequentially across time [1]. This required deploying a novel statistical approach to permit investigation of neural activity at the scale of individual trials. Here we present tests using synthetic data to assess the sensitivity and specificity of this analysis. We fabricated datasets to match each of several potential response patterns derived from single-stimulus response distributions. In particular, we simulated dual stimulus trial spike counts that reflected fluctuating mixtures of the single stimulus spike counts, stable intermediate averages, single stimulus winner-take-all, or response distributions that were outside the range defined by the single stimulus responses (such as summation or suppression). We then assessed how well the analysis recovered the correct response pattern as a function of the number of simulated trials and the difference between the simulated responses to each "stimulus" alone. We found excellent recovery of the mixture, intermediate, and outside categories (>97% correct), and good recovery of the single/winner-take-all category (>90% correct) when the number of trials was >20 and the single-stimulus response rates were 50Hz and 20Hz respectively. Both larger numbers of trials and greater separation between the single stimulus firing rates improved categorization accuracy. These results provide a benchmark, and guidelines for data collection, for use of this method to investigate coding of multiple items at the individual-trial time scale.
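Two of the candidate response patterns the abstract simulates, whole-trial fluctuating mixtures versus stable intermediate averages, can be fabricated with stdlib Python alone. The rates, window, and all names below are illustrative assumptions mirroring the 50 Hz / 20 Hz benchmark mentioned above, not the authors' actual simulation code:

```python
import math
import random

def simulate_dual_trials(rate_a=50.0, rate_b=20.0, n_trials=20,
                         window=1.0, mode="mixture", seed=0):
    """Fabricate dual-stimulus spike counts under two candidate codes.

    Rates are in Hz over a `window`-second counting window.
    'mixture'      : each trial draws its count from stimulus A's or
                     B's rate, i.e. a whole-trial fluctuation.
    'intermediate' : every trial draws from the average of the rates.
    """
    rng = random.Random(seed)
    counts = []
    for _ in range(n_trials):
        if mode == "mixture":
            lam = rng.choice((rate_a, rate_b)) * window
        else:  # 'intermediate'
            lam = 0.5 * (rate_a + rate_b) * window
        # Poisson draw via Knuth's inversion (keeps the sketch
        # dependency-free; numpy.random.poisson would also work).
        L, k, p = math.exp(-lam), 0, 1.0
        while p > L:
            k += 1
            p *= rng.random()
        counts.append(k - 1)
    return counts

mix = simulate_dual_trials(mode="mixture")
mid = simulate_dual_trials(mode="intermediate")
```

Both fabricated datasets share the same mean rate, so a trial-averaged analysis cannot tell them apart; only a single-trial analysis like the one benchmarked here can, which is the point of testing recovery on synthetic data with a known ground truth.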

Pub Date : 2020-01-01
Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8425354/pdf/nihms-1702888.pdf
Citations: 0
Parallel scalable simulations of biological neural networks using TensorFlow: A beginner’s guide
Pub Date : 2019-06-10 DOI: 10.51628/001c.37893
Rishika Mohanta, Collins G. Assisi
Biological neural networks are often modeled as systems of coupled, nonlinear, ordinary or partial differential equations. The number of differential equations used to model a network increases with the size of the network and the level of detail used to model individual neurons and synapses. As one scales up the size of the simulation, it becomes essential to utilize powerful computing platforms. While many tools exist that solve these equations numerically, they are often platform-specific. Further, there is a high barrier of entry to developing flexible platform-independent general-purpose code that supports hardware acceleration on modern computing architectures such as GPUs/TPUs and Distributed Platforms. TensorFlow is a Python-based open-source package designed for machine learning algorithms. However, it is also a scalable environment for a variety of computations, including solving differential equations using iterative algorithms such as Runge-Kutta methods. In this article and the accompanying tutorials, we present a simple exposition of numerical methods to solve ordinary differential equations using Python and TensorFlow. The tutorials consist of a series of Python notebooks that, over the course of five sessions, will lead novice programmers from writing programs to integrate simple one-dimensional ordinary differential equations using Python to solving a large system (1000’s of differential equations) of coupled conductance-based neurons using a highly parallelized and scalable framework. Embedded with the tutorial is a physiologically realistic implementation of a network in the insect olfactory system. This system, consisting of multiple neuron and synapse types, can serve as a template to simulate other networks.
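The iterative scheme at the heart of the tutorial, the classical fourth-order Runge-Kutta (RK4) method, can be shown in plain Python before scaling it up; this dependency-free stand-in integrates a textbook test problem rather than the tutorial's TensorFlow implementation, and the names below are illustrative:

```python
import math

def rk4_step(f, t, y, h):
    """One classical Runge-Kutta (RK4) step for dy/dt = f(t, y)."""
    k1 = f(t, y)
    k2 = f(t + h / 2, y + h * k1 / 2)
    k3 = f(t + h / 2, y + h * k2 / 2)
    k4 = f(t + h, y + h * k3)
    return y + h * (k1 + 2 * k2 + 2 * k3 + k4) / 6

# Integrate dy/dt = -y with y(0) = 1 from t = 0 to t = 1;
# the exact solution is exp(-t), so the error is checkable.
y, t, h = 1.0, 0.0, 0.01
while t < 1.0 - 1e-12:
    y = rk4_step(lambda t, y: -y, t, y, h)
    t += h
```

In the tutorial the same update is written with TensorFlow tensor operations, so that `y` can be a vector holding the state of thousands of coupled neurons and the step is evaluated in parallel on a GPU/TPU; the numerical scheme itself is unchanged.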
Citations: 3