A sticky Poisson Hidden Markov Model for spike data

bioRxiv · Pub Date: 2024-08-08 · DOI: 10.1101/2024.08.07.606969
Tianshu Li, Giancarlo La Camera
{"title":"A sticky Poisson Hidden Markov Model for spike data","authors":"Tianshu Li, Giancarlo La Camera","doi":"10.1101/2024.08.07.606969","DOIUrl":null,"url":null,"abstract":"Fitting a hidden Markov Model (HMM) to neural data is a powerful method to segment a spatiotemporal stream of neural activity into sequences of discrete hidden states. Application of HMM has allowed to uncover hidden states and signatures of neural dynamics that seem relevant for sensory and cognitive processes. This has been accomplished especially in datasets comprising ensembles of simultaneously recorded cortical spike trains. However, the HMM analysis of spike data is involved and requires a careful handling of model selection. Two main issues are: (i) the cross-validated likelihood function typically increases with the number of hidden states; (ii) decoding the data with an HMM can lead to very rapid state switching due to fast oscillations in state probabilities. The first problem is related to the phenomenon of over-segmentation and leads to overfitting. The second problem is at odds with the empirical fact that hidden states in cortex tend to last from hundred of milliseconds to seconds. Here, we show that we can alleviate both problems by regularizing a Poisson-HMM during training so as to enforce large self-transition probabilities. We call this algorithm the ‘sticky Poisson-HMM’ (sPHMM). When used to-gether with the Bayesian Information Criterion for model selection, the sPHMM successfully eliminates rapid state switching, outperforming an alternative strategy based on an HMM with a large prior on the self-transition probabilities. The sPHMM also captures the ground truth in surrogate datasets built to resemble the statistical properties of the experimental data.","PeriodicalId":505198,"journal":{"name":"bioRxiv","volume":null,"pages":null},"PeriodicalIF":0.0000,"publicationDate":"2024-08-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"bioRxiv","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1101/2024.08.07.606969","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 0

Abstract

Fitting a hidden Markov Model (HMM) to neural data is a powerful method to segment a spatiotemporal stream of neural activity into sequences of discrete hidden states. HMM analysis has made it possible to uncover hidden states and signatures of neural dynamics that seem relevant for sensory and cognitive processes, especially in datasets comprising ensembles of simultaneously recorded cortical spike trains. However, the HMM analysis of spike data is involved and requires careful handling of model selection. Two main issues are: (i) the cross-validated likelihood function typically increases with the number of hidden states; (ii) decoding the data with an HMM can lead to very rapid state switching due to fast oscillations in state probabilities. The first problem is related to the phenomenon of over-segmentation and leads to overfitting. The second problem is at odds with the empirical fact that hidden states in cortex tend to last from hundreds of milliseconds to seconds. Here, we show that we can alleviate both problems by regularizing a Poisson-HMM during training so as to enforce large self-transition probabilities. We call this algorithm the 'sticky Poisson-HMM' (sPHMM). When used together with the Bayesian Information Criterion for model selection, the sPHMM successfully eliminates rapid state switching, outperforming an alternative strategy based on an HMM with a large prior on the self-transition probabilities. The sPHMM also captures the ground truth in surrogate datasets built to resemble the statistical properties of the experimental data.
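
To make the idea concrete, below is a minimal Python sketch of one way to fit a "sticky" Poisson-HMM and score it with the BIC. It is not the authors' implementation: the stickiness here is implemented by adding a pseudocount `kappa` to the expected self-transition counts in the M-step of EM, which is just one common way to bias the fitted transition matrix toward large self-transition probabilities; the function names (`fit_sticky_phmm`, `bic`) and all parameter choices are illustrative assumptions.

```python
# Illustrative sketch only: EM (Baum-Welch) for a Poisson-HMM with a "sticky"
# pseudocount on self-transitions, plus a BIC score for choosing the number of
# states. The kappa-pseudocount scheme is an assumption, not the paper's exact
# regularization.
import numpy as np


def poisson_loglik(counts, rates):
    """Emission log-likelihoods: counts is (T, N) spike counts, rates is (K, N).

    Returns a (T, K) array of log P(counts_t | state k), dropping the
    log(counts!) term, which is constant in the parameters.
    """
    return counts @ np.log(rates).T - rates.sum(axis=1)


def fit_sticky_phmm(counts, K, kappa=10.0, n_iter=100, seed=0):
    """EM for a Poisson-HMM with a sticky pseudocount on self-transitions."""
    rng = np.random.default_rng(seed)
    T, N = counts.shape
    rates = counts.mean(axis=0) * (0.5 + rng.random((K, N))) + 1e-6  # (K, N) firing rates
    A = np.full((K, K), 1.0 / K)   # state-transition matrix
    pi = np.full(K, 1.0 / K)       # initial state distribution
    loglik = -np.inf
    for _ in range(n_iter):
        # E-step: scaled forward-backward.
        logB = poisson_loglik(counts, rates)
        m = logB.max(axis=1)
        B = np.exp(logB - m[:, None])          # rescaled emission likelihoods
        alpha = np.zeros((T, K)); beta = np.ones((T, K)); c = np.zeros(T)
        alpha[0] = pi * B[0]; c[0] = alpha[0].sum(); alpha[0] /= c[0]
        for t in range(1, T):
            alpha[t] = (alpha[t - 1] @ A) * B[t]
            c[t] = alpha[t].sum(); alpha[t] /= c[t]
        for t in range(T - 2, -1, -1):
            beta[t] = (A @ (B[t + 1] * beta[t + 1])) / c[t + 1]
        gamma = alpha * beta                   # (T, K) posterior state probabilities
        xi = np.zeros((K, K))                  # expected transition counts
        for t in range(T - 1):
            xi += np.outer(alpha[t], B[t + 1] * beta[t + 1]) * A / c[t + 1]
        loglik = np.log(c).sum() + m.sum()
        # M-step: the "sticky" part is the kappa pseudocount on the diagonal,
        # which pushes the self-transition probabilities toward large values.
        A = xi + kappa * np.eye(K)
        A /= A.sum(axis=1, keepdims=True)
        pi = gamma[0]
        rates = (gamma.T @ counts) / gamma.sum(axis=0)[:, None] + 1e-6
    return {"A": A, "pi": pi, "rates": rates, "loglik": loglik, "gamma": gamma}


def bic(loglik, K, N, T):
    """BIC (lower is better): K*N rates + K*(K-1) transition + (K-1) initial params."""
    n_params = K * N + K * (K - 1) + (K - 1)
    return n_params * np.log(T) - 2.0 * loglik


if __name__ == "__main__":
    # Toy usage: 5 neurons, 2000 time bins of synthetic spike counts; pick K by BIC.
    rng = np.random.default_rng(1)
    counts = rng.poisson(lam=2.0, size=(2000, 5)).astype(float)
    scores = {K: bic(fit_sticky_phmm(counts, K, n_iter=25)["loglik"], K, 5, 2000)
              for K in (2, 3, 4)}
    print(scores)
```

In this sketch the pseudocount plays the role of the regularization described in the abstract; in practice one would cross-validate or inspect the decoded state durations to check that rapid state switching has indeed been suppressed.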