Murat Kucukosmanoglu, Javier O. Garcia, Justin Brooks, Kanika Bansal
{"title":"认知网络和性能利用 DNN 模型驱动基于 fMRI 的状态分类","authors":"Murat Kucukosmanoglu, Javier O. Garcia, Justin Brooks, Kanika Bansal","doi":"arxiv-2409.00003","DOIUrl":null,"url":null,"abstract":"Deep neural network (DNN) models have demonstrated impressive performance in\nvarious domains, yet their application in cognitive neuroscience is limited due\nto their lack of interpretability. In this study we employ two structurally\ndifferent and complementary DNN-based models, a one-dimensional convolutional\nneural network (1D-CNN) and a bidirectional long short-term memory network\n(BiLSTM), to classify individual cognitive states from fMRI BOLD data, with a\nfocus on understanding the cognitive underpinnings of the classification\ndecisions. We show that despite the architectural differences, both models\nconsistently produce a robust relationship between prediction accuracy and\nindividual cognitive performance, such that low performance leads to poor\nprediction accuracy. To achieve model explainability, we used permutation\ntechniques to calculate feature importance, allowing us to identify the most\ncritical brain regions influencing model predictions. Across models, we found\nthe dominance of visual networks, suggesting that task-driven state differences\nare primarily encoded in visual processing. Attention and control networks also\nshowed relatively high importance, however, default mode and temporal-parietal\nnetworks demonstrated negligible contribution in differentiating cognitive\nstates. Additionally, we observed individual trait-based effects and subtle\nmodel-specific differences, such that 1D-CNN showed slightly better overall\nperformance, while BiLSTM showed better sensitivity for individual behavior;\nthese initial findings require further research and robustness testing to be\nfully established. Our work underscores the importance of explainable DNN\nmodels in uncovering the neural mechanisms underlying cognitive state\ntransitions, providing a foundation for future work in this domain.","PeriodicalId":501517,"journal":{"name":"arXiv - QuanBio - Neurons and Cognition","volume":"11 1","pages":""},"PeriodicalIF":0.0000,"publicationDate":"2024-08-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Cognitive Networks and Performance Drive fMRI-Based State Classification Using DNN Models\",\"authors\":\"Murat Kucukosmanoglu, Javier O. Garcia, Justin Brooks, Kanika Bansal\",\"doi\":\"arxiv-2409.00003\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Deep neural network (DNN) models have demonstrated impressive performance in\\nvarious domains, yet their application in cognitive neuroscience is limited due\\nto their lack of interpretability. In this study we employ two structurally\\ndifferent and complementary DNN-based models, a one-dimensional convolutional\\nneural network (1D-CNN) and a bidirectional long short-term memory network\\n(BiLSTM), to classify individual cognitive states from fMRI BOLD data, with a\\nfocus on understanding the cognitive underpinnings of the classification\\ndecisions. We show that despite the architectural differences, both models\\nconsistently produce a robust relationship between prediction accuracy and\\nindividual cognitive performance, such that low performance leads to poor\\nprediction accuracy. 
To achieve model explainability, we used permutation\\ntechniques to calculate feature importance, allowing us to identify the most\\ncritical brain regions influencing model predictions. Across models, we found\\nthe dominance of visual networks, suggesting that task-driven state differences\\nare primarily encoded in visual processing. Attention and control networks also\\nshowed relatively high importance, however, default mode and temporal-parietal\\nnetworks demonstrated negligible contribution in differentiating cognitive\\nstates. Additionally, we observed individual trait-based effects and subtle\\nmodel-specific differences, such that 1D-CNN showed slightly better overall\\nperformance, while BiLSTM showed better sensitivity for individual behavior;\\nthese initial findings require further research and robustness testing to be\\nfully established. Our work underscores the importance of explainable DNN\\nmodels in uncovering the neural mechanisms underlying cognitive state\\ntransitions, providing a foundation for future work in this domain.\",\"PeriodicalId\":501517,\"journal\":{\"name\":\"arXiv - QuanBio - Neurons and Cognition\",\"volume\":\"11 1\",\"pages\":\"\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2024-08-14\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"arXiv - QuanBio - Neurons and Cognition\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/arxiv-2409.00003\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"arXiv - QuanBio - Neurons and Cognition","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/arxiv-2409.00003","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Cognitive Networks and Performance Drive fMRI-Based State Classification Using DNN Models
Deep neural network (DNN) models have demonstrated impressive performance in various domains, yet their application in cognitive neuroscience remains limited by their lack of interpretability. In this study, we employ two structurally different and complementary DNN-based models, a one-dimensional convolutional neural network (1D-CNN) and a bidirectional long short-term memory network (BiLSTM), to classify individual cognitive states from fMRI BOLD data, with a focus on understanding the cognitive underpinnings of the classification decisions. We show that despite the architectural differences, both models consistently produce a robust relationship between prediction accuracy and individual cognitive performance, such that low cognitive performance leads to poor prediction accuracy.
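The abstract does not specify the exact architectures, but a minimal PyTorch sketch of the two model families might look as follows; the dimensions (`n_regions`, `n_timepoints`, `n_states`) and all layer sizes are hypothetical placeholders, not the authors' configuration:

```python
import torch
import torch.nn as nn

n_regions, n_timepoints, n_states = 100, 200, 4  # hypothetical dimensions

class CNN1D(nn.Module):
    """1D-CNN: convolves over time, treating brain regions as input channels."""
    def __init__(self):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv1d(n_regions, 64, kernel_size=7, padding=3),
            nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),   # pool over the temporal axis
        )
        self.fc = nn.Linear(64, n_states)

    def forward(self, x):              # x: (batch, n_regions, n_timepoints)
        return self.fc(self.conv(x).squeeze(-1))

class BiLSTMClassifier(nn.Module):
    """BiLSTM: reads the BOLD time series in both temporal directions."""
    def __init__(self):
        super().__init__()
        self.lstm = nn.LSTM(n_regions, 64, batch_first=True, bidirectional=True)
        self.fc = nn.Linear(2 * 64, n_states)

    def forward(self, x):              # x: (batch, n_timepoints, n_regions)
        out, _ = self.lstm(x)
        return self.fc(out[:, -1, :])  # classify from the final hidden state

logits = CNN1D()(torch.randn(8, n_regions, n_timepoints))  # smoke test
```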
To achieve model explainability, we used permutation techniques to calculate feature importance, allowing us to identify the brain regions most critical to the models' predictions.
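The permutation procedure is not detailed in the abstract; the standard approach shuffles one input feature at a time (here, one brain region's time series) across samples and measures the resulting drop in classification accuracy. A minimal sketch under that assumption, reusing the tensor layout from the sketch above:

```python
import torch

def permutation_importance(model, X, y, n_repeats=10, seed=0):
    """Importance of each region = mean accuracy drop when that region's
    time series is shuffled across samples (model weights held fixed)."""
    g = torch.Generator().manual_seed(seed)
    model.eval()
    with torch.no_grad():
        def accuracy(data):
            return (model(data).argmax(dim=1) == y).float().mean().item()
        baseline = accuracy(X)
        importance = []
        for region in range(X.shape[1]):          # axis 1 = regions (CNN layout)
            drops = []
            for _ in range(n_repeats):
                Xp = X.clone()
                idx = torch.randperm(len(X), generator=g)
                Xp[:, region] = X[idx, region]    # break region-label coupling
                drops.append(baseline - accuracy(Xp))
            importance.append(sum(drops) / n_repeats)
    return importance
```

Regions whose shuffling produces the largest accuracy drop are the ones the model relies on most.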
Across models, we found that visual networks dominated, suggesting that task-driven state differences are primarily encoded in visual processing. Attention and control networks also showed relatively high importance; however, default mode and temporal-parietal networks contributed negligibly to differentiating cognitive states.
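Reporting importance at the level of canonical networks (visual, attention, control, default mode, temporal-parietal) presumably involves aggregating region-level scores by network membership from a parcellation atlas; a hypothetical sketch, where `region_to_network` is an assumed lookup table:

```python
from collections import defaultdict

def network_importance(importance, region_to_network):
    """Average region-level importance scores within each canonical network.
    `region_to_network` maps region index -> network name (atlas-dependent)."""
    buckets = defaultdict(list)
    for region, score in enumerate(importance):
        buckets[region_to_network[region]].append(score)
    return {net: sum(s) / len(s) for net, s in buckets.items()}
```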
Additionally, we observed individual trait-based effects and subtle model-specific differences: the 1D-CNN showed slightly better overall performance, while the BiLSTM showed better sensitivity to individual behavior; these initial findings require further research and robustness testing to be fully established. Our work underscores the importance of explainable DNN models in uncovering the neural mechanisms underlying cognitive state transitions, providing a foundation for future work in this domain.