Neuroscience of the yogic theory of consciousness.
Vaibhav Tripathi, Pallavi Bharadwaj
Neuroscience of Consciousness, 2021(2): niab030. Published 2021-10-07; eCollection 2021. DOI: 10.1093/nc/niab030

Yoga as a practice and philosophy of life has been followed for more than 4500 years, with known evidence of yogic practices in the Indus Valley Civilization. The last few decades have seen a resurgence of yoga and meditation as practices, backed by growing scientific evidence. Significant scientific literature has been published illustrating the benefits of yogic practices, including 'asana', 'pranayama' and 'dhyana', for mental and physical well-being. Electrophysiological and recent functional magnetic resonance imaging (fMRI) studies have found explicit neural signatures for yogic practices. In this article, we present a review of the philosophy of yoga, based on the dualistic 'Sankhya' school, as applied to consciousness and summarized by Patanjali in his Yoga Sutras, followed by a discussion of the five 'vritti' (modulations of mind), the practices of 'pratyahara', 'dharana' and 'dhyana', the different states of 'samadhi', and 'samapatti'. We formulate the yogic theory of consciousness (YTC), a cohesive theory that can model both external modulations and internal states of the mind. We propose that attention, sleep and mind wandering should be understood as unique modulatory states of the mind. YTC allows us to model the external states, the internal states of meditation and 'samadhi', and even the disorders of consciousness. Furthermore, we list some testable neuroscientific hypotheses that could be answered using YTC and analyse its benefits, outcomes and possible limitations.
Of maps and grids.
Matteo Grasso, Andrew M Haun, Giulio Tononi
Neuroscience of Consciousness, 2021(2): niab022. Published 2021-09-21; eCollection 2021. DOI: 10.1093/nc/niab022

Neuroscience has made remarkable advances in accounting for how the brain performs its various functions. Consciousness, too, is usually approached in functional terms: the goal is to understand how the brain represents information, accesses that information, and acts on it. While useful for prediction, this functional, information-processing approach leaves out the subjective structure of experience: it does not account for how experience feels. Here, we consider a simple model of how a "grid-like" network meant to resemble posterior cortical areas can represent spatial information and act on it to perform a simple "fixation" function. Using standard neuroscience tools, we show how the model represents topographically the retinal position of a stimulus and triggers eye muscles to fixate or follow it. Encoding, decoding, and tuning functions of model units illustrate the working of the model in a way that fully explains what the model does. However, these functional properties have nothing to say about the fact that a human fixating a stimulus would also "see" it, that is, experience it at a location in space. Using the tools of Integrated Information Theory, we then show how the subjective properties of experienced space, its extendedness, can be accounted for in objective, neuroscientific terms by the "cause-effect structure" specified by the grid-like cortical area. By contrast, a "map-like" network without lateral connections, meant to resemble a pretectal circuit, is functionally equivalent to the grid-like system with respect to representation, action, and fixation but cannot account for the phenomenal properties of space.
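The core contrast in this abstract — two networks with identical input-output behaviour but different internal connectivity — can be illustrated with a toy sketch. This is not the authors' model: the unit counts, weights, and function names are invented for illustration. Both networks drive the same "fixation" read-out, yet only the grid-like one has the lateral connections that, on the IIT account, shape its cause-effect structure.

```python
# Illustrative sketch (hypothetical, not the published model): two toy
# topographic networks that are functionally equivalent for a fixation
# read-out but differ in lateral connectivity.

def decode_position(activity):
    """Read out the most active unit -- the 'fixation target'."""
    return max(range(len(activity)), key=lambda i: activity[i])

def grid_network(stimulus_pos, n=9):
    """Topographic units WITH weak lateral (neighbour) connections."""
    act = [1.0 if i == stimulus_pos else 0.0 for i in range(n)]
    # Lateral spread: each unit also receives input from its neighbours.
    return [act[i]
            + 0.2 * (act[i - 1] if i > 0 else 0.0)
            + 0.2 * (act[i + 1] if i < n - 1 else 0.0)
            for i in range(n)]

def map_network(stimulus_pos, n=9):
    """Topographic units WITHOUT lateral connections."""
    return [1.0 if i == stimulus_pos else 0.0 for i in range(n)]

# Both networks support the same behaviour (fixating position 4)...
assert decode_position(grid_network(4)) == decode_position(map_network(4)) == 4
# ...but their internal activity patterns, and hence their connectivity
# structures, differ.
assert grid_network(4) != map_network(4)
```

The point of the sketch is only that a purely functional analysis (the decoder) cannot distinguish the two systems, whereas their internal structure does.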
Publisher's note to: towards a computational phenomenology of mental action: modelling meta-awareness and attentional control with deep parametric active inference
L. Sandved-Smith, C. Hesp, J. Mattout, K. Friston, A. Lutz, M. Ramstead
Neuroscience of Consciousness. Published 2021-09-16. DOI: 10.1093/nc/niab035
V1 as an egocentric cognitive map.
Paul Linton
Neuroscience of Consciousness, 2021(2): niab017. Published 2021-09-14; eCollection 2021. DOI: 10.1093/nc/niab017

We typically distinguish between V1 as an egocentric perceptual map and the hippocampus as an allocentric cognitive map. In this article, we argue that V1 also functions as a post-perceptual egocentric cognitive map. We argue that three well-documented functions of V1, namely (i) the estimation of distance, (ii) the estimation of size, and (iii) multisensory integration, are better understood as post-perceptual cognitive inferences. This argument has two important implications. First, we argue that V1 must function as the neural correlates of the visual perception/cognition distinction and suggest how this can be accommodated by V1's laminar structure. Second, we use this insight to propose a low-level account of visual consciousness in contrast to mid-level accounts (recurrent processing theory; integrated information theory) and higher-level accounts (higher-order thought; global workspace theory). Detection thresholds have been traditionally used to rule out such an approach, but we explain why it is a mistake to equate visibility (and therefore the presence/absence of visual experience) with detection thresholds.
Local neuronal relational structures underlying the contents of human conscious experience.
Rafael Malach
Neuroscience of Consciousness, 2021(2): niab028. Published 2021-09-03; eCollection 2021. DOI: 10.1093/nc/niab028

While most theories of consciousness posit some kind of dependence on global network activities, I consider here an alternative, localist perspective, in which localized cortical regions each underlie the emergence of a unique category of conscious experience. Under this perspective, the large-scale activation often found in the cortex is a consequence of the complexity of typical conscious experiences rather than an obligatory condition for the emergence of conscious awareness, which can flexibly shift, depending on the richness of its contents, from local to more global activation patterns. This perspective fits a massive body of human imaging, recordings, lesions and stimulation data but opens a fundamental problem: how can the information defining each content be derived locally in each cortical region? Here, I will discuss a solution echoing pioneering structuralist ideas, in which the content of a conscious experience is defined by its relationship to all other contents within an experiential category. In neuronal terms, this relationship structure between contents is embodied by the local geometry of similarity distances between cortical activation patterns generated during each conscious experience, likely mediated via networks of local neuronal connections. Thus, in order for any conscious experience to appear in an individual's mind, two central conditions must be met. First, a specific configural pattern ("bar-code") of neuronal activity must appear within a local relational geometry, i.e. a cortical area. Second, the individual neurons underlying the activated pattern must be bound into a unified functional ensemble through a burst of recurrent neuronal firing: local "ignitions".
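The "local geometry of similarity distances" described in this abstract can be made concrete with a toy computation. Everything here is hypothetical: the contents, the four-unit activation "bar-codes", and the choice of Euclidean distance are illustrative stand-ins, not data or methods from the paper.

```python
# Illustrative sketch: a content's identity as its position in the local
# geometry of distances to other contents (all patterns are invented).
import math

def euclid(p, q):
    """Euclidean distance between two activation patterns."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(p, q)))

# Hypothetical activation "bar-codes" for three contents within one
# experiential category (e.g. colour), in a single cortical area.
patterns = {
    "red":    [0.9, 0.1, 0.2, 0.0],
    "green":  [0.1, 0.9, 0.2, 0.0],
    "orange": [0.7, 0.4, 0.2, 0.0],
}

def relational_profile(name):
    """Define a content by its distances to all other contents."""
    return {other: euclid(patterns[name], patterns[other])
            for other in patterns if other != name}

# In this toy geometry, 'orange' sits closer to 'red' than to 'green' --
# its identity is carried by that relational structure, not by any
# single unit's activity.
profile = relational_profile("orange")
assert profile["red"] < profile["green"]
```

On this structuralist reading, what individuates "orange" is not the raw pattern but where the pattern falls within the area's distance geometry.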
Consciousness in active inference: Deep self-models, other minds, and the challenge of psychedelic-induced ego-dissolution.
George Deane
Neuroscience of Consciousness, 2021(2): niab024. Published 2021-09-01; eCollection 2021. DOI: 10.1093/nc/niab024

Predictive processing approaches to brain function are increasingly delivering promise for illuminating the computational underpinnings of a wide range of phenomenological states. It remains unclear, however, whether predictive processing is equipped to accommodate a theory of consciousness itself. Furthermore, objectors have argued that without specification of the core computational mechanisms of consciousness, predictive processing is unable to inform the attribution of consciousness to other non-human (biological and artificial) systems. In this paper, I argue that an account of consciousness in the predictive brain is within reach via recent accounts of phenomenal self-modelling in the active inference framework. The central claim here is that phenomenal consciousness is underpinned by 'subjective valuation': a deep inference about the precision or 'predictability' of the self-evidencing ('fitness-promoting') outcomes of action. Based on this account, I argue that this approach can critically inform the distribution of experience in other systems, paying particular attention to the complex sensory attenuation mechanisms associated with deep self-models. I then consider an objection to the account: several recent papers argue that theories of consciousness that invoke self-consciousness as constitutive or necessary for consciousness are undermined by states (or traits) of 'selflessness'; in particular, the 'totally selfless' states of ego-dissolution occasioned by psychedelic drugs. Drawing on existing work that accounts for psychedelic-induced ego-dissolution in the active inference framework, I argue that these states do not threaten to undermine an active inference theory of consciousness. Instead, these accounts corroborate the view that subjective valuation is the constitutive facet of experience, and they highlight the potential of psychedelic research to inform consciousness science, computational psychiatry and computational phenomenology.
Towards a computational phenomenology of mental action: modelling meta-awareness and attentional control with deep parametric active inference.
Lars Sandved-Smith, Casper Hesp, Jérémie Mattout, Karl Friston, Antoine Lutz, Maxwell J D Ramstead
Neuroscience of Consciousness, 2021(2): niab018. Published 2021-08-27; eCollection 2021. DOI: 10.1093/nc/niab018

Meta-awareness refers to the capacity to explicitly notice the current content of consciousness and has been identified as a key component for the successful control of cognitive states, such as the deliberate direction of attention. This paper proposes a formal model of meta-awareness and attentional control using hierarchical active inference. To do so, we cast mental action as policy selection over higher-level cognitive states and add a further hierarchical level to model meta-awareness states that modulate the expected confidence (precision) in the mapping between observations and hidden cognitive states. We simulate the example of mind-wandering and its regulation during a task involving sustained selective attention on a perceptual object. This provides a computational case study for an inferential architecture that is apt to enable the emergence of these central components of human phenomenology, namely, the ability to access and control cognitive states. We propose that this approach can be generalized to other cognitive states, and hence, this paper provides the first steps towards the development of a computational phenomenology of mental action and more broadly of our ability to monitor and control our own cognitive states. Future steps of this work will focus on fitting the model with qualitative, behavioural, and neural data.
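The key mechanism in this abstract, a meta-awareness state modulating the expected precision of the mapping between observations and hidden cognitive states, can be sketched in a few lines. This is a minimal illustration of precision-weighted inference in general, not the paper's deep parametric model: the states, the likelihood numbers, and the flat prior are all assumptions made for the example.

```python
# Illustrative sketch (not the published model): precision scaling the
# log-likelihood of an observation under two hidden cognitive states,
# ["focused", "wandering"]. All numbers are invented.
import math

def softmax(xs):
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]

# Toy likelihood: P(observation | state) for an off-task cue.
log_likelihood = {"off_task_cue": [math.log(0.2), math.log(0.8)]}

def posterior(obs, precision):
    """Infer the cognitive state from one observation. `precision`
    (expected confidence in the state-observation mapping, here playing
    the role of a meta-awareness state) scales the log-likelihood."""
    prior = [0.5, 0.5]
    ll = log_likelihood[obs]
    return softmax([math.log(p) + precision * l for p, l in zip(prior, ll)])

high = posterior("off_task_cue", precision=2.0)  # strong meta-awareness
low = posterior("off_task_cue", precision=0.2)   # weak meta-awareness

# With high precision, mind-wandering is confidently detected; with low
# precision, the posterior stays close to the flat prior.
assert high[1] > low[1]
assert abs(low[0] - low[1]) < abs(high[0] - high[1])
```

The sketch shows why low precision over this mapping behaves like a lapse of meta-awareness: the same evidence barely moves beliefs about one's own cognitive state.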
Explanatory profiles of models of consciousness - towards a systematic classification.
Camilo Miguel Signorelli, Joanna Szczotka, Robert Prentner
Neuroscience of Consciousness, 2021(2): niab021. Published 2021-08-27; eCollection 2021. DOI: 10.1093/nc/niab021

Models of consciousness aim to inspire new experimental protocols and aid interpretation of empirical evidence to reveal the structure of conscious experience. Nevertheless, no current model is univocally accepted on either theoretical or empirical grounds. Moreover, a straightforward comparison is difficult for conceptual reasons. In particular, we argue that different models explicitly or implicitly subscribe to different notions of what constitutes a satisfactory explanation, use different tools in their explanatory endeavours and even aim to explain very different phenomena. We thus present a framework to compare existing models in the field with respect to what we call their 'explanatory profiles'. We focus on the following minimal dimensions: mode of explanation, mechanisms of explanation and target of explanation. We also discuss the empirical consequences of the discussed discrepancies among models. This approach may eventually lead to identifying driving assumptions, theoretical commitments, experimental predictions and a better design of future testing experiments. Finally, our conclusion points to more integrative theoretical research, where axiomatic models may play a critical role in solving current theoretical and experimental contradictions.
Pub Date: 2021-08-18 | eCollection Date: 2021-01-01 | DOI: 10.1093/nc/niab019
Simon Hviid Del Pin, Zuzanna Skóra, Kristian Sandberg, Morten Overgaard, Michał Wierzchoń
The theoretical landscape of scientific studies of consciousness has flourished. Today, even multiple versions of the same theory are sometimes available. To advance the field, these theories should be directly compared to determine which are better at predicting and explaining empirical data. Systematic inquiries of this sort are seen in many subfields in cognitive psychology and neuroscience, e.g. in working memory. Nonetheless, when we surveyed publications on consciousness research, we found that most focused on a single theory. When 'comparisons' happened, they were often verbal and non-systematic. This fact in itself could be a contributing reason for the lack of convergence between theories in consciousness research. In this paper, we focus on how to compare theories of consciousness to ensure that the comparisons are meaningful, e.g. whether their predictions are parallel or contrasting. We evaluate how theories are typically compared in consciousness research and related subdisciplines in cognitive psychology and neuroscience, and we provide an example of our approach. We then examine the different reasons why direct comparisons between theories are rarely seen. One possible explanation is the unique nature of the consciousness phenomenon. We conclude that the field should embrace this uniqueness, and we set out the features that a theory of consciousness should account for.
Title: Comparing theories of consciousness: why it matters and how to do it. (Neuroscience of Consciousness, 2021 2, niab019; open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8372971/pdf/)
Pub Date: 2021-08-12 | eCollection Date: 2021-01-01 | DOI: 10.1093/nc/niab020
Ishan Singhal, Narayanan Srinivasan
Temporality and the feeling of 'now' are fundamental properties of consciousness. Different conceptualizations of time-consciousness have argued that both the content of our experiences and the representations of those experiences evolve in time, or that neither has temporal extension, or that only content does. Accounting for these different positions, we propose a nested hierarchical model of multiple timescales that accommodates findings on the timing of cognition and the phenomenology of temporal experience. This framework hierarchically combines the three major philosophical positions on time-consciousness (i.e. cinematic, extensional and retentional) and presents a common basis for temporal experience. We detail the properties of these hierarchical levels and speculate how they could coexist mechanistically. We also place several findings on timing and temporal experience at different levels in this hierarchy and show how they can be brought together. Finally, the framework is used to derive novel predictions for both the timing of our experiences and time perception. The theoretical framework offers a novel dynamic space that can bring together sub-fields of cognitive science such as perception, attention, action and consciousness research in understanding and describing our experiences both in and of time.
Title: Time and time again: a multi-scale hierarchical framework for time-consciousness and timing of cognition. (Neuroscience of Consciousness, 2021 2, niab020; open-access PDF: https://ftp.ncbi.nlm.nih.gov/pub/pmc/oa_pdf/89/54/niab020.PMC8358708.pdf)