Sparsity-Constrained fMRI Decoding of Visual Saliency in Naturalistic Video Streams

Xintao Hu, Cheng Lv, Gong Cheng, Jinglei Lv, Lei Guo, Junwei Han, Tianming Liu

IEEE Transactions on Autonomous Mental Development, vol. 7, no. 1, pp. 65-75, March 9, 2015. DOI: 10.1109/TAMD.2015.2409835 (https://doi.org/10.1109/TAMD.2015.2409835)
Naturalistic stimuli such as video watching have been increasingly used in functional magnetic resonance imaging (fMRI)-based brain encoding and decoding studies, since they provide the kind of real, dynamic information that the human brain must process in everyday life. In this paper, we propose a sparsity-constrained decoding model to explore whether bottom-up visual saliency in continuous video streams can be effectively decoded from brain activity recorded by fMRI, and to examine whether sparsity constraints improve visual saliency decoding. Specifically, we use a biologically plausible computational model to quantify the visual saliency in video streams, and adopt a sparse representation algorithm to learn atomic fMRI signal dictionaries that are representative of the patterns of whole-brain fMRI signals. Sparse representation also links the learned atomic dictionary with the quantified video saliency. Experimental results show that the temporal visual saliency in video streams can be decoded well and that sparsity constraints improve the performance of fMRI decoding models.
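The pipeline the abstract describes — quantify a saliency time course from the video, learn a sparse temporal dictionary from whole-brain fMRI signals, then link dictionary atoms to the quantified saliency — can be sketched as follows. This is a minimal illustration under stated assumptions, not the authors' implementation: the array shapes, the random stand-in data, and the use of scikit-learn's DictionaryLearning (in place of whatever sparse coding solver the paper used) are all assumptions made to keep the example small and runnable.

# Minimal sketch of the sparse-representation linkage step (NOT the authors' code).
# Assumed shapes: fmri is (n_voxels, n_timepoints) of preprocessed whole-brain
# signals; saliency is an (n_timepoints,) bottom-up saliency time course.
import numpy as np
from sklearn.decomposition import DictionaryLearning

rng = np.random.default_rng(0)
n_voxels, n_timepoints, n_atoms = 200, 120, 16   # toy sizes for illustration

fmri = rng.standard_normal((n_voxels, n_timepoints))  # stand-in for real fMRI data
saliency = rng.random(n_timepoints)                   # stand-in saliency time course

# Learn temporal "atoms": each voxel's time series is one sample, so each row
# of the learned dictionary is a whole-brain temporal pattern (an atom).
learner = DictionaryLearning(
    n_components=n_atoms,
    alpha=1.0,                          # sparsity penalty on the coefficients
    transform_algorithm="lasso_lars",
    max_iter=50,
    random_state=0,
)
codes = learner.fit_transform(fmri)     # (n_voxels, n_atoms) sparse loadings
atoms = learner.components_             # (n_atoms, n_timepoints) temporal atoms

# Link the dictionary to the quantified saliency: rank atoms by how strongly
# their temporal profile correlates with the saliency time course.
corr = np.array([np.corrcoef(atom, saliency)[0, 1] for atom in atoms])
best = np.argsort(-np.abs(corr))[:5]
print("atoms most correlated with saliency:", best, corr[best].round(3))

In the paper, the saliency time course comes from a biologically plausible bottom-up model rather than random data, and such a curve would typically be convolved with a hemodynamic response function before being compared with fMRI-derived atoms; the simple correlation ranking above stands in for that linkage step.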