{"title":"从具有复杂动态的视觉神经活动中构建潜在因子的时间依赖性 VAE","authors":"Liwei Huang, ZhengYu Ma, Liutao Yu, Huihui Zhou, Yonghong Tian","doi":"arxiv-2408.07908","DOIUrl":null,"url":null,"abstract":"Seeking high-quality neural latent representations to reveal the intrinsic\ncorrelation between neural activity and behavior or sensory stimulation has\nattracted much interest. Currently, some deep latent variable models rely on\nbehavioral information (e.g., movement direction and position) as an aid to\nbuild expressive embeddings while being restricted by fixed time scales. Visual\nneural activity from passive viewing lacks clearly correlated behavior or task\ninformation, and high-dimensional visual stimulation leads to intricate neural\ndynamics. To cope with such conditions, we propose Time-Dependent SwapVAE,\nfollowing the approach of separating content and style spaces in Swap-VAE, on\nthe basis of which we introduce state variables to construct conditional\ndistributions with temporal dependence for the above two spaces. Our model\nprogressively generates latent variables along neural activity sequences, and\nwe apply self-supervised contrastive learning to shape its latent space. In\nthis way, it can effectively analyze complex neural dynamics from sequences of\narbitrary length, even without task or behavioral data as auxiliary inputs. We\ncompare TiDe-SwapVAE with alternative models on synthetic data and neural data\nfrom mouse visual cortex. The results show that our model not only accurately\ndecodes complex visual stimuli but also extracts explicit temporal neural\ndynamics, demonstrating that it builds latent representations more relevant to\nvisual stimulation.","PeriodicalId":501517,"journal":{"name":"arXiv - QuanBio - Neurons and Cognition","volume":"25 1","pages":""},"PeriodicalIF":0.0000,"publicationDate":"2024-08-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Time-Dependent VAE for Building Latent Factor from Visual Neural Activity with Complex Dynamics\",\"authors\":\"Liwei Huang, ZhengYu Ma, Liutao Yu, Huihui Zhou, Yonghong Tian\",\"doi\":\"arxiv-2408.07908\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Seeking high-quality neural latent representations to reveal the intrinsic\\ncorrelation between neural activity and behavior or sensory stimulation has\\nattracted much interest. Currently, some deep latent variable models rely on\\nbehavioral information (e.g., movement direction and position) as an aid to\\nbuild expressive embeddings while being restricted by fixed time scales. Visual\\nneural activity from passive viewing lacks clearly correlated behavior or task\\ninformation, and high-dimensional visual stimulation leads to intricate neural\\ndynamics. To cope with such conditions, we propose Time-Dependent SwapVAE,\\nfollowing the approach of separating content and style spaces in Swap-VAE, on\\nthe basis of which we introduce state variables to construct conditional\\ndistributions with temporal dependence for the above two spaces. Our model\\nprogressively generates latent variables along neural activity sequences, and\\nwe apply self-supervised contrastive learning to shape its latent space. In\\nthis way, it can effectively analyze complex neural dynamics from sequences of\\narbitrary length, even without task or behavioral data as auxiliary inputs. We\\ncompare TiDe-SwapVAE with alternative models on synthetic data and neural data\\nfrom mouse visual cortex. 
The results show that our model not only accurately\\ndecodes complex visual stimuli but also extracts explicit temporal neural\\ndynamics, demonstrating that it builds latent representations more relevant to\\nvisual stimulation.\",\"PeriodicalId\":501517,\"journal\":{\"name\":\"arXiv - QuanBio - Neurons and Cognition\",\"volume\":\"25 1\",\"pages\":\"\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2024-08-15\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"arXiv - QuanBio - Neurons and Cognition\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/arxiv-2408.07908\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"arXiv - QuanBio - Neurons and Cognition","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/arxiv-2408.07908","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Time-Dependent VAE for Building Latent Factor from Visual Neural Activity with Complex Dynamics
Seeking high-quality neural latent representations that reveal the intrinsic correlation between neural activity and behavior or sensory stimulation has attracted much interest. Current deep latent variable models often rely on behavioral information (e.g., movement direction and position) to build expressive embeddings, and they are restricted to fixed time scales. Visual neural activity recorded during passive viewing lacks clearly correlated behavioral or task information, and high-dimensional visual stimulation gives rise to intricate neural dynamics. To cope with these conditions, we propose the Time-Dependent SwapVAE (TiDe-SwapVAE). Following Swap-VAE's separation of content and style spaces, we introduce state variables that construct temporally dependent conditional distributions for both spaces. The model generates latent variables progressively along a neural activity sequence, and self-supervised contrastive learning shapes its latent space. It can therefore analyze complex neural dynamics from sequences of arbitrary length, even without task or behavioral data as auxiliary inputs. We compare TiDe-SwapVAE with alternative models on synthetic data and on neural data from mouse visual cortex. The results show that our model not only accurately decodes complex visual stimuli but also extracts explicit temporal neural dynamics, demonstrating that it builds latent representations more closely related to visual stimulation.
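
To make the abstract's description concrete, below is a minimal, hypothetical PyTorch sketch of the general idea: a VAE that keeps separate content and style latents (as in Swap-VAE) and conditions both on a recurrent state variable that carries temporal dependence across time steps, plus an InfoNCE-style contrastive term to shape the content space. All layer sizes, the GRU state, the Gaussian posteriors, and the `info_nce` helper are illustrative assumptions, not the authors' published implementation.

```python
# Hypothetical sketch of a time-dependent content/style VAE with a recurrent
# state variable; sizes and choices are assumptions, not the paper's code.
import torch
import torch.nn as nn
import torch.nn.functional as F


class TimeDependentVAE(nn.Module):
    def __init__(self, n_neurons, d_content=16, d_style=16, d_state=64):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(n_neurons, 128), nn.ReLU())
        # State variable carrying temporal dependence across time steps.
        self.state_rnn = nn.GRUCell(128, d_state)
        # Conditional posteriors q(z_t | x_t, s_t) for content and style.
        self.to_content = nn.Linear(128 + d_state, 2 * d_content)
        self.to_style = nn.Linear(128 + d_state, 2 * d_style)
        self.decoder = nn.Sequential(
            nn.Linear(d_content + d_style, 128), nn.ReLU(),
            nn.Linear(128, n_neurons),
        )
        self.d_state = d_state

    @staticmethod
    def reparameterize(stats):
        mu, logvar = stats.chunk(2, dim=-1)
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)
        return z, mu, logvar

    def forward(self, x):
        # x: (batch, time, n_neurons) activity sequence of arbitrary length.
        batch, T, _ = x.shape
        state = x.new_zeros(batch, self.d_state)
        recons, contents, kl = [], [], 0.0
        for t in range(T):
            h = self.encoder(x[:, t])
            state = self.state_rnn(h, state)  # update the state variable
            hs = torch.cat([h, state], dim=-1)
            zc, mu_c, lv_c = self.reparameterize(self.to_content(hs))
            zs, mu_s, lv_s = self.reparameterize(self.to_style(hs))
            recons.append(self.decoder(torch.cat([zc, zs], dim=-1)))
            contents.append(zc)
            for mu, lv in ((mu_c, lv_c), (mu_s, lv_s)):
                kl = kl + (-0.5 * (1 + lv - mu.pow(2) - lv.exp()).sum(-1)).mean()
        recon = torch.stack(recons, dim=1)
        content = torch.stack(contents, dim=1)
        return recon, content, kl / T


def info_nce(z_a, z_b, temperature=0.1):
    # Self-supervised contrastive term: content latents from two augmented
    # views of the same time step are pulled together, all others pushed apart.
    z_a = F.normalize(z_a.flatten(0, 1), dim=-1)
    z_b = F.normalize(z_b.flatten(0, 1), dim=-1)
    logits = z_a @ z_b.t() / temperature
    targets = torch.arange(z_a.size(0), device=z_a.device)
    return F.cross_entropy(logits, targets)
```

A training step under these assumptions would combine a reconstruction loss on `recon`, the KL term, and `info_nce` between the content latents of two augmented views of the same sequence; Swap-VAE additionally swaps content latents across views before decoding, which is omitted here for brevity.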