Time-Dependent VAE for Building Latent Factor from Visual Neural Activity with Complex Dynamics

Liwei Huang, ZhengYu Ma, Liutao Yu, Huihui Zhou, Yonghong Tian
{"title":"Time-Dependent VAE for Building Latent Factor from Visual Neural Activity with Complex Dynamics","authors":"Liwei Huang, ZhengYu Ma, Liutao Yu, Huihui Zhou, Yonghong Tian","doi":"arxiv-2408.07908","DOIUrl":null,"url":null,"abstract":"Seeking high-quality neural latent representations to reveal the intrinsic\ncorrelation between neural activity and behavior or sensory stimulation has\nattracted much interest. Currently, some deep latent variable models rely on\nbehavioral information (e.g., movement direction and position) as an aid to\nbuild expressive embeddings while being restricted by fixed time scales. Visual\nneural activity from passive viewing lacks clearly correlated behavior or task\ninformation, and high-dimensional visual stimulation leads to intricate neural\ndynamics. To cope with such conditions, we propose Time-Dependent SwapVAE,\nfollowing the approach of separating content and style spaces in Swap-VAE, on\nthe basis of which we introduce state variables to construct conditional\ndistributions with temporal dependence for the above two spaces. Our model\nprogressively generates latent variables along neural activity sequences, and\nwe apply self-supervised contrastive learning to shape its latent space. In\nthis way, it can effectively analyze complex neural dynamics from sequences of\narbitrary length, even without task or behavioral data as auxiliary inputs. We\ncompare TiDe-SwapVAE with alternative models on synthetic data and neural data\nfrom mouse visual cortex. 
The results show that our model not only accurately\ndecodes complex visual stimuli but also extracts explicit temporal neural\ndynamics, demonstrating that it builds latent representations more relevant to\nvisual stimulation.","PeriodicalId":501517,"journal":{"name":"arXiv - QuanBio - Neurons and Cognition","volume":null,"pages":null},"PeriodicalIF":0.0000,"publicationDate":"2024-08-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"arXiv - QuanBio - Neurons and Cognition","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/arxiv-2408.07908","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 0

Abstract

Seeking high-quality neural latent representations to reveal the intrinsic correlation between neural activity and behavior or sensory stimulation has attracted much interest. Currently, some deep latent variable models rely on behavioral information (e.g., movement direction and position) as an aid to build expressive embeddings while being restricted by fixed time scales. Visual neural activity from passive viewing lacks clearly correlated behavior or task information, and high-dimensional visual stimulation leads to intricate neural dynamics. To cope with such conditions, we propose Time-Dependent SwapVAE, following the approach of separating content and style spaces in Swap-VAE, on the basis of which we introduce state variables to construct conditional distributions with temporal dependence for the above two spaces. Our model progressively generates latent variables along neural activity sequences, and we apply self-supervised contrastive learning to shape its latent space. In this way, it can effectively analyze complex neural dynamics from sequences of arbitrary length, even without task or behavioral data as auxiliary inputs. We compare TiDe-SwapVAE with alternative models on synthetic data and neural data from mouse visual cortex. The results show that our model not only accurately decodes complex visual stimuli but also extracts explicit temporal neural dynamics, demonstrating that it builds latent representations more relevant to visual stimulation.
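The abstract describes latents being generated progressively along the sequence, with state variables making the content and style distributions temporally dependent. The paper's actual architecture is not reproduced here; the following is only a hedged toy sketch of that generative pattern, with a hypothetical state update and fixed variances standing in for the learned networks of TiDe-SwapVAE.

```python
import math
import random

random.seed(0)

def reparameterize(mu, logvar):
    """Sample z ~ N(mu, exp(logvar)) via the reparameterization trick."""
    return [m + math.exp(0.5 * lv) * random.gauss(0.0, 1.0)
            for m, lv in zip(mu, logvar)]

def sequential_latents(x_seq, d_content=2, d_style=2):
    """Toy illustration (not the paper's model): at each time step a state
    variable h_t conditions the posterior over a content latent and a style
    latent, so the latent sequence is generated progressively and carries
    temporal dependence."""
    h = [0.0] * (d_content + d_style)        # recurrent state variable
    latents = []
    for x_t in x_seq:
        # hypothetical encoder: mix the input with the previous state
        # (stand-in for the learned conditional distributions)
        mu = [0.5 * h[i] + 0.5 * x_t[i % len(x_t)] for i in range(len(h))]
        logvar = [-1.0] * len(h)              # fixed variance for the sketch
        z = reparameterize(mu, logvar)
        z_content, z_style = z[:d_content], z[d_content:]
        h = z                                 # state passes to the next step
        latents.append((z_content, z_style))
    return latents

seq = [[0.1, 0.2], [0.3, 0.1], [0.0, 0.4]]
lats = sequential_latents(seq)
print(len(lats))  # one (content, style) pair per time step
```

Because the state at step t is the previous latent sample, each conditional distribution depends on the whole history, which is the property the abstract attributes to the state variables; sequences of arbitrary length just extend the loop.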