Nagur Shareef Shaik, Teja Krishna Cherukuri, Vince D. Calhoun, Dong Hye Ye
Schizophrenia (SZ) is a severe brain disorder marked by diverse cognitive impairments and by abnormalities in brain structure, function, and genetics. Its complex symptoms and overlap with other psychiatric conditions challenge traditional diagnostic methods, necessitating advanced systems to improve precision. Existing research has mostly focused on imaging data, such as structural and functional MRI, for SZ diagnosis; there has been less focus on integrating genomic features despite their potential for identifying heritable SZ traits. In this study, we introduce a Multi-modal Imaging Genomics Transformer (MIGTrans) that attentively integrates genomics with structural and functional imaging data to capture SZ-related neuroanatomical and connectome abnormalities. MIGTrans demonstrated improved SZ classification performance with an accuracy of 86.05% (+/- 0.02), offering clear interpretations and identifying significant genomic locations and brain morphological/connectivity patterns associated with SZ.
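The attentive integration described here can be illustrated with a minimal cross-attention sketch. This is not the MIGTrans architecture (the abstract does not specify it); the function names, dimensions, and toy vectors below are hypothetical, showing only how a genomic embedding can attend over imaging feature tokens.

```python
import math

def softmax(xs):
    m = max(xs)
    es = [math.exp(x - m) for x in xs]
    s = sum(es)
    return [e / s for e in es]

def cross_attention(query, keys, values):
    """Single-head scaled dot-product attention: one genomic query
    attends over imaging feature tokens (keys/values)."""
    d = len(query)
    scores = [sum(q * k for q, k in zip(query, key)) / math.sqrt(d) for key in keys]
    weights = softmax(scores)
    fused = [sum(w * v[i] for w, v in zip(weights, values))
             for i in range(len(values[0]))]
    return fused, weights

# Toy example: one 4-dim genomic embedding attending over 3 imaging tokens.
genomic_q = [0.2, 0.8, -0.1, 0.5]
imaging_kv = [[0.1, 0.9, 0.0, 0.4],
              [0.7, -0.2, 0.3, 0.1],
              [0.0, 0.0, 1.0, -0.5]]
fused, weights = cross_attention(genomic_q, imaging_kv, imaging_kv)
```

The fused vector is a weighted mixture of imaging tokens, with weights reflecting each token's relevance to the genomic query; interpretability comes from inspecting those weights.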
Title: "Multi-modal Imaging Genomics Transformer: Attentive Integration of Imaging with Genomic Biomarkers for Schizophrenia Classification" (arXiv:2407.19385 [q-bio.NC], published 2024-07-28)
Chi-Sheng Chen, Samuel Yen-Chi Chen, Aidan Hung-Wen Tsai, Chun-Shu Wei
Electroencephalography (EEG) is a critical tool in neuroscience and clinical practice for monitoring and analyzing brain activity. Traditional neural network models, such as EEGNet, have achieved considerable success in decoding EEG signals but often struggle with the complexity and high dimensionality of the data. Recent advances in quantum computing present new opportunities to enhance machine learning models through quantum machine learning (QML) techniques. In this paper, we introduce Quantum-EEGNet (QEEGNet), a novel hybrid neural network that integrates quantum computing with the classical EEGNet architecture to improve EEG encoding and analysis. We present QEEGNet as a forward-looking approach, acknowledging that its results do not always surpass traditional methods but demonstrate the paradigm's potential. QEEGNet incorporates quantum layers within the neural network, allowing it to capture more intricate patterns in EEG data and potentially offering computational advantages. We evaluate QEEGNet on a benchmark EEG dataset, BCI Competition IV 2a, demonstrating that it outperforms traditional EEGNet on most subjects and exhibits greater robustness to noise. Our results highlight the significant potential of quantum-enhanced neural networks in EEG analysis, suggesting new directions for both research and practical applications in the field.
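The abstract does not detail QEEGNet's quantum layers, but the generic building block of such hybrid models, an angle-encoded, parameterized single-qubit rotation whose Pauli-Z expectation is read out, can be simulated exactly in a few lines. All names and values here are illustrative assumptions, not QEEGNet's actual circuit.

```python
import math

def ry_expectation_z(x, theta):
    """One-qubit 'quantum layer': angle-encode feature x with RY(x),
    apply a trainable RY(theta), and return <Z>. For RY rotations
    acting on |0>, the amplitudes stay real and <Z> = cos(x + theta)."""
    a = (x + theta) / 2.0
    state = [math.cos(a), math.sin(a)]   # amplitudes of |0> and |1>
    return state[0] ** 2 - state[1] ** 2  # <Z> = |amp0|^2 - |amp1|^2

def quantum_layer(features, thetas):
    """Map each EEG-derived feature through its own single-qubit circuit."""
    return [ry_expectation_z(x, t) for x, t in zip(features, thetas)]

outputs = quantum_layer([0.3, 1.1, -0.4], [0.1, -0.2, 0.5])
```

Each output is a bounded, smoothly parameterized feature; in a hybrid network such expectations would feed into the subsequent classical layers.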
Title: "QEEGNet: Quantum Machine Learning for Enhanced Electroencephalography Encoding" (arXiv:2407.19214 [q-bio.NC], published 2024-07-27)
Karl Friston, Conor Heins, Tim Verbelen, Lancelot Da Costa, Tommaso Salvatori, Dimitrije Markovic, Alexander Tschantz, Magnus Koudahl, Christopher Buckley, Thomas Parr
This paper describes a discrete state-space model -- and accompanying methods -- for generative modelling. This model generalises partially observed Markov decision processes to include paths as latent variables, rendering it suitable for active inference and learning in a dynamic setting. Specifically, we consider deep or hierarchical forms using the renormalisation group. The ensuing renormalising generative models (RGMs) can be regarded as discrete homologues of deep convolutional neural networks or continuous state-space models in generalised coordinates of motion. By construction, these scale-invariant models can be used to learn compositionality over space and time, furnishing models of paths or orbits; i.e., events of increasing temporal depth and itinerancy. This technical note illustrates the automatic discovery, learning and deployment of RGMs using a series of applications. We start with image classification and then consider the compression and generation of movies and music. Finally, we apply the same variational principles to the learning of Atari-like games.
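The renormalisation-group idea behind RGMs, recursively mapping blocks of fine-scale discrete states to coarser latent states, can be sketched as follows. This is only the coarse-graining intuition with a majority-vote rule chosen for illustration, not the paper's actual generative model.

```python
def coarse_grain(grid):
    """One renormalisation step: map each 2x2 block of discrete states
    to a single coarser state (here, by majority vote)."""
    n = len(grid)
    out = []
    for i in range(0, n, 2):
        row = []
        for j in range(0, n, 2):
            block = [grid[i][j], grid[i][j + 1],
                     grid[i + 1][j], grid[i + 1][j + 1]]
            row.append(1 if sum(block) >= 2 else 0)
        out.append(row)
    return out

image = [[1, 1, 0, 0],
         [1, 0, 0, 0],
         [0, 0, 1, 1],
         [0, 0, 1, 1]]
level1 = coarse_grain(image)   # 2x2 grid of coarser latent states
level2 = coarse_grain(level1)  # single top-level state
```

Stacking such steps yields a hierarchy in which each level summarises larger spatial (and, in the paper's dynamic setting, temporal) extents, which is the compositionality the note exploits.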
Title: "From pixels to planning: scale-free active inference" (arXiv:2407.20292 [q-bio.NC], published 2024-07-27)
It has been proposed that there is a wave excitation in animal brains whose role is to represent three-dimensional local space in a working memory. Evidence for the wave comes from the mammalian thalamus, the central body of the insect brain, and computational models of spatial cognition, as described in related papers. From this evidence, I assess the Bayesian probability that the wave exists; the probability of the wave in the brain is robustly greater than 0.4. If there is such a wave, we may need to re-think our whole understanding of the brain, in a break from classical neuroscience. I ask other researchers to comment on the wave hypothesis and on this assessment. In a companion paper, I outline possible ways to test it.
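The kind of Bayesian assessment described, combining a prior with independent lines of evidence, can be sketched as an odds-form update. The prior and likelihood ratios below are illustrative placeholders, not the paper's actual numbers.

```python
def posterior_probability(prior, likelihood_ratios):
    """Combine a prior with independent likelihood ratios
    P(evidence | wave) / P(evidence | no wave) via Bayes' rule on odds."""
    odds = prior / (1.0 - prior)
    for lr in likelihood_ratios:
        odds *= lr
    return odds / (1.0 + odds)

# Illustrative values only: a sceptical prior and three lines of evidence
# (e.g. thalamus, insect central body, computational models).
p = posterior_probability(0.1, [3.0, 2.0, 1.5])
```

With these placeholder numbers the posterior lands at 0.5; the paper's claim is that even conservative choices keep the posterior above 0.4.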
Robert Worden, "Assessing the Brain Wave Hypothesis: Call for Commentary" (arXiv:2408.04636 [q-bio.NC], published 2024-07-25)
Recent advances in molecular and genetic research have identified a diverse range of brain tumor sub-types, shedding light on differences in their molecular mechanisms, heterogeneity, and origins. The present study performs whole-brain connectome analysis using diffusion-weighted images. To achieve this, both graph theory and persistent homology (a prominent approach in topological data analysis) are employed to quantify changes in the structural connectivity of the whole-brain connectome in subjects with brain tumors. Probabilistic tractography is used to map the number of streamlines connecting 84 distinct brain regions, as delineated by the Desikan-Killiany atlas in FreeSurfer. These streamline mappings form the connectome matrix, on which persistent-homology-based and graph-theoretical analyses are executed to evaluate the discriminatory power between tumor sub-types, including meningioma and glioma. A detailed statistical analysis is conducted on the persistent-homology-derived topological features and the graph features to identify the brain regions where differences between study groups are statistically significant (p < 0.05). For classification purposes, graph-based local features are utilized, achieving a peak accuracy of 88%; in classifying tumor sub-types, an accuracy of 80% is attained. The findings underscore the potential of persistent homology and graph-theoretical analysis of the whole-brain connectome in detecting alterations in structural connectivity patterns specific to different types of brain tumors.
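Graph-based local features of the kind mentioned can be computed directly on a streamline-count matrix. The toy 4-region connectome and the threshold below are invented for illustration (the study uses 84 Desikan-Killiany regions), and these two features are just examples of local graph measures.

```python
def node_strengths(conn):
    """Weighted degree (strength) of each region in a streamline-count matrix."""
    return [sum(row) - row[i] for i, row in enumerate(conn)]

def binary_degrees(conn, threshold=0):
    """Degree of each region after thresholding weak connections away."""
    n = len(conn)
    return [sum(1 for j in range(n) if j != i and conn[i][j] > threshold)
            for i in range(n)]

# Toy 4-region connectome (symmetric streamline counts).
conn = [[0, 12, 0, 3],
        [12, 0, 5, 0],
        [0, 5, 0, 8],
        [3, 0, 8, 0]]
strengths = node_strengths(conn)
degrees = binary_degrees(conn, threshold=4)
```

Per-region feature vectors like these are what a classifier would consume; the persistent-homology features come instead from filtering this matrix by edge weight and tracking when components and cycles appear and die.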
Debanjali Bhattacharya, Ninad Aithal, Manish Jayswal, Neelam Sinha, "Analyzing Brain Tumor Connectomics using Graphs and Persistent Homology" (arXiv:2407.17938 [q-bio.NC], published 2024-07-25)
We introduce Human-like Video Models (HVM-1), large-scale video models pretrained with nearly 5000 hours of curated human-like video data (mostly egocentric, temporally extended, continuous video recordings), using the spatiotemporal masked autoencoder (ST-MAE) algorithm. We release two 633M parameter models trained at spatial resolutions of 224x224 and 448x448 pixels. We evaluate the performance of these models in downstream few-shot video and image recognition tasks and compare them against a model pretrained with 1330 hours of short action-oriented video clips from YouTube (Kinetics-700). HVM-1 models perform competitively against the Kinetics-700 pretrained model in downstream evaluations despite substantial qualitative differences between the spatiotemporal characteristics of the corresponding pretraining datasets. HVM-1 models also learn more accurate and more robust object representations compared to models pretrained with the image-based MAE algorithm on the same data, demonstrating the potential benefits of learning to predict temporal regularities in natural videos for learning better object representations.
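ST-MAE pretraining rests on hiding most spatiotemporal patches of a clip and reconstructing them. A minimal sketch of the masking stage follows; the token count and the 90% mask ratio are assumptions for illustration (the abstract does not state HVM-1's ratio).

```python
import random

def random_patch_mask(num_patches, mask_ratio, seed=0):
    """Choose which spatiotemporal patch tokens a masked autoencoder
    encoder never sees; only the visible ones are encoded, and the
    decoder is trained to reconstruct the masked ones."""
    rng = random.Random(seed)
    num_masked = int(num_patches * mask_ratio)
    masked = set(rng.sample(range(num_patches), num_masked))
    visible = [i for i in range(num_patches) if i not in masked]
    return visible, masked

# E.g. a clip tokenized into 16 frames x (14 x 14) spatial patches,
# with a high mask ratio typical of masked autoencoders (assumed 90%).
visible, masked = random_patch_mask(16 * 14 * 14, 0.9)
```

The high mask ratio is what makes the pretext task demand temporal prediction rather than local interpolation, which is the property the abstract credits for better object representations.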
A. Emin Orhan, "HVM-1: Large-scale video models pretrained with nearly 5000 hours of human-like video data" (arXiv:2407.18067 [q-bio.NC], published 2024-07-25)
Parkinson's Disease (PD) afflicts millions of individuals globally. Closed-loop Deep Brain Stimulation (CL-DBS), an emerging and promising brain rehabilitation therapy for PD, aims to alleviate motor symptoms. A CL-DBS system comprises a battery-powered medical device implanted in the chest that sends stimulation signals to the patient's brain. These electrical stimulation signals are delivered to targeted brain regions via electrodes, with adjustable stimulus magnitude. However, current CL-DBS systems rely on energy-inefficient control approaches, including reinforcement learning, fuzzy inference, and field-programmable gate arrays (FPGAs). These approaches make traditional CL-DBS systems impractical for implanted and wearable medical devices. This research proposes a novel neuromorphic approach built on Leaky Integrate-and-Fire (LIF) neuron controllers that adjust the magnitude of the DBS electrical signal according to the varying symptom severities of PD patients. Our neuromorphic controllers, an on-off LIF controller and a dual LIF controller, reduced the power consumption of CL-DBS systems by 19% and 56%, respectively, while increasing suppression efficiency by 4.7% and 6.77%. Additionally, to address the scarcity of data on PD symptoms, we built PD datasets that include raw neural activity from the subthalamic nucleus at beta-oscillation frequencies, a typical physiological biomarker for PD.
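A minimal sketch of an on-off LIF controller of the kind described: a leaky integrator accumulates a beta-power signal, and a threshold crossing ("spike") switches stimulation on. The threshold, leak, gain, and stimulation levels below are invented for illustration, not the paper's tuned values.

```python
def lif_onoff_controller(beta_power, threshold=1.0, leak=0.9, gain=0.5,
                         stim_levels=(0.0, 2.0)):
    """On-off LIF controller sketch: the membrane potential integrates
    beta-band power (a PD symptom biomarker); when it crosses threshold,
    the neuron fires, stimulation switches to the high level, and the
    potential resets. Otherwise stimulation stays off."""
    v = 0.0
    stim = []
    for p in beta_power:
        v = leak * v + gain * p
        if v >= threshold:
            stim.append(stim_levels[1])  # spike: stimulate
            v = 0.0                      # reset membrane potential
        else:
            stim.append(stim_levels[0])
    return stim

# Sustained high beta power (strong symptoms) should trigger stimulation.
stim = lif_onoff_controller([0.1, 0.1, 1.5, 1.5, 0.1, 0.1])
```

The appeal for implants is that the controller is event-driven: energy is spent mainly when symptoms (spikes) occur, rather than continuously.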
Ananna Biswas, Hongyu An, "Preliminary Results of Neuromorphic Controller Design and a Parkinson's Disease Dataset Building for Closed-Loop Deep Brain Stimulation" (arXiv:2407.17756 [q-bio.NC], published 2024-07-25)
Matthew Smart, Stanislav Y. Shvartsman, Martin Mönnigmann
Habituation - a phenomenon in which a dynamical system exhibits a diminishing response to repeated stimulations that eventually recovers when the stimulus is withheld - is universally observed in living systems from animals to unicellular organisms. Despite its prevalence, generic mechanisms for this fundamental form of learning remain poorly defined. Drawing inspiration from prior work on systems that respond adaptively to step inputs, we study habituation from a nonlinear dynamics perspective. This approach enables us to formalize classical hallmarks of habituation that have been experimentally identified in diverse organisms and stimulus scenarios. We use this framework to investigate distinct dynamical circuits capable of habituation. In particular, we show that driven linear dynamics of a memory variable with static nonlinearities acting at the input and output can implement numerous hallmarks in a mathematically interpretable manner. This work establishes a foundation for understanding the dynamical substrates of this primitive learning behavior and offers a blueprint for the identification of habituating circuits in biological systems.
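The proposed motif, driven linear dynamics of a memory variable with static nonlinearities at input and output, is easy to simulate. The specific output nonlinearity y = u / (1 + m) and the time constant below are one choice made for illustration, but they reproduce two classic hallmarks: a diminishing response under repeated stimulation and recovery once the stimulus is withheld.

```python
def habituation_response(stimulus, tau=5.0, dt=1.0):
    """Minimal habituating motif: a linearly driven memory variable m
    (dm/dt = -m/tau + u) gates the response through a static
    nonlinearity y = u / (1 + m). Repetition grows m and shrinks y;
    withholding the stimulus lets m decay and y recover."""
    m, responses = 0.0, []
    for u in stimulus:
        m += dt * (-m / tau + u)
        responses.append(u / (1.0 + m))
    return responses

pulses = [1, 0, 1, 0, 1, 0] + [0] * 20 + [1]   # train, rest, re-test
r = habituation_response(pulses)
first, habituated, recovered = r[0], r[4], r[-1]
```

The same skeleton makes other hallmarks testable, e.g. frequency sensitivity falls out of how quickly m decays between pulses relative to the inter-stimulus interval.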
Title: "Minimal motifs for habituating systems" (arXiv:2407.18204 [q-bio.NC], published 2024-07-25)
Use-dependent bias is a phenomenon in human sensorimotor behavior whereby movements become biased towards previously repeated actions. Although the phenomenon is well documented, why it occurs is not yet clearly understood. Here, we propose that use-dependent biases can be understood as a rational strategy for movement under limitations on the capacity to process sensory information to guide motor output. We adopt an information-theoretic approach to characterize sensorimotor information processing and determine how behavior should be optimized given limitations to this capacity. We show that this theory naturally predicts the existence of use-dependent biases. Our framework also generates two further predictions. The first prediction relates to handedness. The dominant hand is associated with enhanced dexterity and reduced movement variability compared to the non-dominant hand, which we propose relates to a greater capacity for information processing in regions that control movement of the dominant hand. Consequently, the dominant hand should exhibit smaller use-dependent biases compared to the non-dominant hand. The second prediction relates to how use-dependent biases are affected by movement speed. When moving faster, it is more challenging to correct for initial movement errors online during the movement. This should exacerbate costs associated with initial directional error and, according to our theory, reduce the extent of use-dependent biases compared to slower movements, and vice versa. We show that these two empirical predictions, the handedness effect and the speed-dependent effect, are confirmed by experimental data.
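One way to make the capacity argument concrete is the Gaussian rate-distortion shrinkage factor 1 - 2^(-2R): at a channel rate of R bits, the optimal reconstruction of a Gaussian source shrinks toward the prior mean by exactly this factor. Treating the prior mean as the previously repeated action yields a use-dependent bias that weakens as capacity grows, matching the handedness prediction. This sketch is one reading of the framework under those assumptions, not the authors' exact model.

```python
def biased_action(target, prior_mean, capacity_bits):
    """Capacity-limited action sketch: with rate R bits available for
    transmitting the target, the rate-distortion-optimal output keeps
    a signal fraction w = 1 - 2**(-2R) of the target and fills the
    remainder from the prior over recently repeated actions."""
    w = 1.0 - 2.0 ** (-2.0 * capacity_bits)
    return w * target + (1.0 - w) * prior_mean

# Reaching at 45 degrees after repeatedly practicing 90 degrees:
# a lower-capacity controller (e.g. the non-dominant hand) is pulled
# more strongly toward the repeated action.
high_cap = biased_action(45.0, 90.0, capacity_bits=3.0)
low_cap = biased_action(45.0, 90.0, capacity_bits=1.0)
```

The speed prediction follows the same logic: when online correction is harder, directional error is costlier, which in the theory acts like demanding a higher effective rate for the initial direction and hence less shrinkage toward the prior.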
Hokin X. Deng, Adrian M. Haith, "Use-dependent Biases as Optimal Action under Information Bottleneck" (arXiv:2407.17793 [q-bio.NC], published 2024-07-25)
The aim of this work is to present a mathematical framework for the study of flickering inputs in visual processing tasks. When combined with geometric patterns, these inputs influence and induce interesting psychophysical phenomena, such as the MacKay and the Billock-Tsou effects, where subjects perceive specific afterimages typically modulated by the flickering frequency. Due to the symmetry-breaking structure of the inputs, classical bifurcation theory and multi-scale analysis techniques are not very effective in our context. We thus take an approach based on the input-output framework of control theory for Amari-type neural fields. This allows us to prove that, when driven by periodic inputs, the dynamics converge to a periodic state. Moreover, we study under which assumptions these nonlinear dynamics can be effectively linearised, and in this case we present a precise approximation of the integral kernel for short-range excitatory and long-range inhibitory neuronal interactions. Finally, for inputs concentrated at the center of the visual field with a flickering background, we directly relate the width of the illusory contours appearing in the afterimage to both the flickering frequency and the strength of the inhibition.
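An Amari-type field driven by a time-periodic input can be simulated directly on a discretized grid. The Mexican-hat kernel widths, sigmoid nonlinearity, grid size, and flicker frequency below are assumptions for illustration, not the paper's settings; the sketch only shows the setup whose convergence to a periodic state the paper proves.

```python
import math

def simulate_amari(n=32, steps=400, dt=0.05, tau=1.0, freq=1.0):
    """Euler-discretized Amari field:
        tau * da/dt = -a + (1/n) * sum_j K(x_i, x_j) * S(a_j) + I(x_i, t)
    with short-range excitation / long-range inhibition and a
    time-periodic ('flickering') input centred on the field."""
    xs = [i - n // 2 for i in range(n)]

    def K(x, y):  # Mexican-hat interaction kernel (illustrative widths)
        d2 = (x - y) ** 2
        return 1.5 * math.exp(-d2 / 2.0) - 0.5 * math.exp(-d2 / 18.0)

    def S(a):  # sigmoid firing-rate nonlinearity
        return 1.0 / (1.0 + math.exp(-a))

    kernel = [[K(x, y) for y in xs] for x in xs]
    a = [0.0] * n
    for t in range(steps):
        drive = math.sin(2.0 * math.pi * freq * t * dt)  # flicker
        rates = [S(ai) for ai in a]
        a = [ai + (dt / tau) * (-ai
                                + sum(kernel[i][j] * rates[j]
                                      for j in range(n)) / n
                                + drive * math.exp(-xs[i] ** 2 / 8.0))
             for i, ai in enumerate(a)]
    return a

final = simulate_amari()  # field state after roughly 20 flicker periods
```

After the transient, activity at the driven centre oscillates at the input frequency while the periphery stays near rest, the regime in which the paper's linearisation and contour-width analysis apply.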
Maria Virginia Bolelli, Dario Prandi, "Neural field equations with time-periodic external inputs and some applications to visual processing" (arXiv:2407.17294 [q-bio.NC], published 2024-07-24)