Pub Date: 2021-09-01; eCollection Date: 2021-01-01; DOI: 10.1093/nc/niab024
Consciousness in active inference: Deep self-models, other minds, and the challenge of psychedelic-induced ego-dissolution
George Deane
Neuroscience of Consciousness, 2021(2): niab024. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8408766/pdf/

Predictive processing approaches to brain function increasingly show promise for illuminating the computational underpinnings of a wide range of phenomenological states. It remains unclear, however, whether predictive processing is equipped to accommodate a theory of consciousness itself. Furthermore, objectors have argued that without specification of the core computational mechanisms of consciousness, predictive processing is unable to inform the attribution of consciousness to other non-human (biological and artificial) systems. In this paper, I argue that an account of consciousness in the predictive brain is within reach via recent accounts of phenomenal self-modelling in the active inference framework. The central claim here is that phenomenal consciousness is underpinned by 'subjective valuation': a deep inference about the precision or 'predictability' of the self-evidencing ('fitness-promoting') outcomes of action. Based on this account, I argue that this approach can critically inform the distribution of experience in other systems, paying particular attention to the complex sensory attenuation mechanisms associated with deep self-models. I then consider an objection to the account: several recent papers argue that theories of consciousness that invoke self-consciousness as constitutive or necessary for consciousness are undermined by states (or traits) of 'selflessness', in particular the 'totally selfless' states of ego-dissolution occasioned by psychedelic drugs. Drawing on existing work that accounts for psychedelic-induced ego-dissolution in the active inference framework, I argue that these states do not threaten to undermine an active inference theory of consciousness. Instead, these accounts corroborate the view that subjective valuation is the constitutive facet of experience, and they highlight the potential of psychedelic research to inform consciousness science, computational psychiatry and computational phenomenology.
Pub Date: 2021-08-27; eCollection Date: 2021-01-01; DOI: 10.1093/nc/niab018
Towards a computational phenomenology of mental action: modelling meta-awareness and attentional control with deep parametric active inference
Lars Sandved-Smith, Casper Hesp, Jérémie Mattout, Karl Friston, Antoine Lutz, Maxwell J D Ramstead
Neuroscience of Consciousness, 2021(2): niab018. Open-access PDF: https://ftp.ncbi.nlm.nih.gov/pub/pmc/oa_pdf/e0/28/niab018.PMC8396119.pdf

Meta-awareness refers to the capacity to explicitly notice the current content of consciousness and has been identified as a key component for the successful control of cognitive states, such as the deliberate direction of attention. This paper proposes a formal model of meta-awareness and attentional control using hierarchical active inference. To do so, we cast mental action as policy selection over higher-level cognitive states and add a further hierarchical level to model meta-awareness states that modulate the expected confidence (precision) in the mapping between observations and hidden cognitive states. We simulate the example of mind-wandering and its regulation during a task involving sustained selective attention on a perceptual object. This provides a computational case study for an inferential architecture that is apt to enable the emergence of these central components of human phenomenology, namely, the ability to access and control cognitive states. We propose that this approach can be generalized to other cognitive states, and hence, this paper provides the first steps towards the development of a computational phenomenology of mental action and more broadly of our ability to monitor and control our own cognitive states. Future steps of this work will focus on fitting the model with qualitative, behavioural, and neural data.
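The precision-weighting move at the heart of this model can be sketched in a few lines. The following is a minimal illustration under my own assumptions (two hidden states, a flat prior, hypothetical function names), not the authors' actual implementation: the posterior over hidden cognitive states is a softmax of the log-likelihood scaled by an expected precision, so lowering precision flattens the posterior, mimicking a reduced ability to notice mind-wandering.

```python
# Minimal sketch of precision-weighted state inference (names and
# numbers are illustrative, not taken from the paper).
import math

def softmax(xs):
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def infer_state(likelihoods, gamma):
    """Posterior over hidden cognitive states given per-state observation
    likelihoods P(o|s), with precision gamma scaling confidence in the
    observation-state mapping (flat prior assumed)."""
    return softmax([gamma * math.log(p) for p in likelihoods])

# The observation is twice as likely under 'focused' as under 'wandering'.
lik = [0.6, 0.3]  # P(o | focused), P(o | wandering)

high = infer_state(lik, gamma=4.0)  # high precision: confident mapping
low = infer_state(lik, gamma=0.5)   # low precision: mapping downweighted

# Higher precision sharpens the posterior; lower precision flattens it
# towards the prior, which is the formal analogue of losing meta-awareness.
assert high[0] > low[0] > 0.5
```

In the full hierarchical model, a yet-higher level infers and adjusts `gamma` itself; this snippet only shows the effect that such a precision state would have on one level of inference.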
Pub Date: 2021-08-27; eCollection Date: 2021-01-01; DOI: 10.1093/nc/niab021
Explanatory profiles of models of consciousness - towards a systematic classification
Camilo Miguel Signorelli, Joanna Szczotka, Robert Prentner
Neuroscience of Consciousness, 2021(2): niab021. Open-access PDF: https://ftp.ncbi.nlm.nih.gov/pub/pmc/oa_pdf/33/1d/niab021.PMC8396118.pdf

Models of consciousness aim to inspire new experimental protocols and aid interpretation of empirical evidence to reveal the structure of conscious experience. Nevertheless, no current model is univocally accepted on either theoretical or empirical grounds. Moreover, a straightforward comparison is difficult for conceptual reasons. In particular, we argue that different models explicitly or implicitly subscribe to different notions of what constitutes a satisfactory explanation, use different tools in their explanatory endeavours and even aim to explain very different phenomena. We thus present a framework to compare existing models in the field with respect to what we call their 'explanatory profiles'. We focus on the following minimal dimensions: mode of explanation, mechanisms of explanation and target of explanation. We also discuss the empirical consequences of the discussed discrepancies among models. This approach may eventually lead to identifying driving assumptions, theoretical commitments, experimental predictions and better design of future testing experiments. Finally, our conclusion points to more integrative theoretical research, where axiomatic models may play a critical role in solving current theoretical and experimental contradictions.
Pub Date: 2021-08-18; eCollection Date: 2021-01-01; DOI: 10.1093/nc/niab019
Comparing theories of consciousness: why it matters and how to do it
Simon Hviid Del Pin, Zuzanna Skóra, Kristian Sandberg, Morten Overgaard, Michał Wierzchoń
Neuroscience of Consciousness, 2021(2): niab019. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8372971/pdf/

The theoretical landscape of scientific studies of consciousness has flourished. Today, even multiple versions of the same theory are sometimes available. To advance the field, these theories should be directly compared to determine which are better at predicting and explaining empirical data. Systematic inquiries of this sort are seen in many subfields in cognitive psychology and neuroscience, e.g. in working memory. Nonetheless, when we surveyed publications on consciousness research, we found that most focused on a single theory. When 'comparisons' happened, they were often verbal and non-systematic. This fact in itself could be a contributing reason for the lack of convergence between theories in consciousness research. In this paper, we focus on how to compare theories of consciousness to ensure that the comparisons are meaningful, e.g. whether their predictions are parallel or contrasting. We evaluate how theories are typically compared in consciousness research and related subdisciplines in cognitive psychology and neuroscience, and we provide an example of our approach. We then examine the different reasons why direct comparisons between theories are rarely seen. One possible explanation is the unique nature of the consciousness phenomenon. We conclude that the field should embrace this uniqueness, and we set out the features that a theory of consciousness should account for.
Pub Date: 2021-08-12; eCollection Date: 2021-01-01; DOI: 10.1093/nc/niab020
Time and time again: a multi-scale hierarchical framework for time-consciousness and timing of cognition
Ishan Singhal, Narayanan Srinivasan
Neuroscience of Consciousness, 2021(2): niab020. Open-access PDF: https://ftp.ncbi.nlm.nih.gov/pub/pmc/oa_pdf/89/54/niab020.PMC8358708.pdf

Temporality and the feeling of 'now' are fundamental properties of consciousness. Different conceptualizations of time-consciousness have argued that both the content of our experiences and the representations of those experiences evolve in time, or that neither has temporal extension, or that only content does. Accounting for these different positions, we propose a nested hierarchical model of multiple timescales that accounts for findings on timing of cognition and the phenomenology of temporal experience. This framework hierarchically combines the three major philosophical positions on time-consciousness (i.e. cinematic, extensional and retentional) and presents a common basis for temporal experience. We detail the properties of these hierarchical levels and speculate how they could coexist mechanistically. We also place several findings on timing and temporal experience at different levels in this hierarchy and show how they can be brought together. Finally, the framework is used to derive novel predictions for both the timing of our experiences and time perception. The theoretical framework offers a novel dynamic space that can bring together sub-fields of cognitive science such as perception, attention, action and consciousness research in understanding and describing our experiences both in and of time.
Pub Date: 2021-08-05; eCollection Date: 2021-01-01; DOI: 10.1093/nc/niab014
Formalizing falsification for theories of consciousness across computational hierarchies
Jake R Hanson, Sara I Walker
Neuroscience of Consciousness, 2021(2): niab014. Open-access PDF: https://ftp.ncbi.nlm.nih.gov/pub/pmc/oa_pdf/c3/ba/niab014.PMC8339439.pdf

The scientific study of consciousness is currently undergoing a critical transition in the form of a rapidly evolving scientific debate regarding whether or not currently proposed theories can be assessed for their scientific validity. At the forefront of this debate is Integrated Information Theory (IIT), widely regarded as the preeminent theory of consciousness because it quantifies subjective experience in a scalar mathematical measure called Φ that is in principle measurable. Epistemological issues in the form of the "unfolding argument" have provided a concrete refutation of IIT by demonstrating how it permits functionally identical systems to have differences in their predicted consciousness. The implication is that IIT and any other proposed theory based on a physical system's causal structure may already be falsified even in the absence of experimental refutation. However, so far many of these arguments surrounding the epistemological foundations of falsification arguments, such as the unfolding argument, are too abstract to determine the full scope of their implications. Here, we make these abstract arguments concrete by providing a simple example of functionally equivalent machines realizable with table-top electronics that take the form of isomorphic digital circuits with and without feedback. This allows us to explicitly demonstrate the different levels of abstraction at which a theory of consciousness can be assessed. Within this computational hierarchy, we show how IIT is simultaneously falsified at the finite-state automaton level and unfalsifiable at the combinatorial-state automaton level. We use this example to illustrate a more general set of falsification criteria for theories of consciousness: to avoid being already falsified, or conversely unfalsifiable, scientific theories of consciousness must be invariant with respect to changes that leave the inference procedure fixed at a particular level in a computational hierarchy.
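The circuit construction described above can be miniaturized in code. This toy example is my own (the paper's actual circuits are table-top electronic designs): two implementations of the same finite-state behaviour, one with a feedback loop and one unfolded into purely feedforward logic, agree on every possible input while differing in internal causal structure, which is exactly the wedge the unfolding argument exploits.

```python
# Toy version of the "unfolding argument" setup: identical input-output
# behaviour, different causal structure. Here the function is the parity
# of a fixed-length bit string.
from itertools import product

def parity_feedback(bits):
    """Recurrent implementation: a single XOR gate whose output is fed
    back as internal state at each time step (a feedback loop)."""
    state = 0
    for b in bits:
        state ^= b  # state feeds back into the next step
    return state

def parity_feedforward(bits):
    """Unfolded implementation: a purely feedforward XOR tree over a
    3-bit input, with no recurrent connections."""
    a, b, c = bits
    return (a ^ b) ^ c

# Functionally identical over every possible 3-bit input, despite the
# different causal structures a structure-sensitive theory would score
# differently.
for bits in product([0, 1], repeat=3):
    assert parity_feedback(bits) == parity_feedforward(bits)
```

Any theory that assigns consciousness based on the presence of the feedback loop must give these two machines different verdicts even though no behavioural experiment on their input-output function can distinguish them.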
Pub Date: 2021-08-04; eCollection Date: 2021-01-01; DOI: 10.1093/nc/niab016
Spontaneous perception: a framework for task-free, self-paced perception
Shira Baror, Biyu J He
Neuroscience of Consciousness, 2021(2): niab016. Open-access PDF: https://ftp.ncbi.nlm.nih.gov/pub/pmc/oa_pdf/a1/75/niab016.PMC8333690.pdf

Flipping through social media feeds, viewing exhibitions in a museum, or walking through the botanical gardens, people consistently choose to engage with and disengage from visual content. Yet, in most laboratory settings, the visual stimuli, their presentation duration, and the task at hand are all controlled by the researcher. Such settings largely overlook the spontaneous nature of human visual experience, in which perception takes place independently from specific task constraints and its time course is determined by the observer as a self-governing agent. Currently, much remains unknown about how spontaneous perceptual experiences unfold in the brain. Are all perceptual categories extracted during spontaneous perception? Does spontaneous perception inherently involve volition? Is spontaneous perception segmented into discrete episodes? How do different neural networks interact over time during spontaneous perception? These questions are imperative for understanding our conscious visual experience in daily life. In this article we propose a framework for spontaneous perception. We first define spontaneous perception as a task-free and self-paced experience. We propose that spontaneous perception is guided by four organizing principles that grant it temporal and spatial structure: coarse-to-fine processing, continuity and segmentation, agency and volition, and associative processing. We provide key suggestions illustrating how these principles may interact with one another in guiding the multifaceted experience of spontaneous perception. We point to testable predictions derived from this framework, including (but not limited to) the roles of the default-mode network and slow cortical potentials in underlying spontaneous perception. We conclude by suggesting several outstanding questions for future research, extending the relevance of this framework to consciousness and spontaneous brain activity. In conclusion, the spontaneous perception framework proposed herein integrates components of human perception and cognition that have traditionally been studied in isolation, and opens the door to understanding how visual perception unfolds in its most natural context.
Pub Date: 2021-08-02; eCollection Date: 2021-01-01; DOI: 10.1093/nc/niab013
Minimal physicalism as a scale-free substrate for cognition and consciousness
Chris Fields, James F Glazebrook, Michael Levin
Neuroscience of Consciousness, 2021(2): niab013. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8327199/pdf/

Theories of consciousness and cognition that assume a neural substrate automatically regard phylogenetically basal, nonneural systems as nonconscious and noncognitive. Here, we advance a scale-free characterization of consciousness and cognition that regards basal systems, including synthetic constructs, as not only informative about the structure and function of experience in more complex systems but also as offering distinct advantages for experimental manipulation. Our "minimal physicalist" approach makes no assumptions beyond those of quantum information theory, and hence is applicable from the molecular scale upwards. We show that standard concepts including integrated information, state broadcasting via small-world networks, and hierarchical Bayesian inference emerge naturally in this setting, and that common phenomena including stigmergic memory, perceptual coarse-graining, and attention switching follow directly from the thermodynamic requirements of classical computation. We show that the self-representation that lies at the heart of human autonoetic awareness can be traced as far back as, and serves the same basic functions as, the stress response in bacteria and other basal systems.
Pub Date: 2021-06-16; eCollection Date: 2021-01-01; DOI: 10.1093/nc/niab012
Oren Kolodny, Roy Moyal, Shimon Edelman
Evolutionary accounts of feelings, and in particular of negative affect and of pain, assume that creatures that feel and care about the outcomes of their behavior outperform those that do not in terms of their evolutionary fitness. Such accounts, however, can only work if feelings can be shown to contribute to fitness-influencing outcomes. Simply assuming that a learner that feels and cares about outcomes is more strongly motivated than one that does not is not enough, if only because motivation can be tied directly to outcomes by incorporating an appropriate reward function, without leaving any apparent role to feelings (as is done in state-of-the-art engineered systems based on reinforcement learning). Here, we propose a possible mechanism whereby pain contributes to fitness: an actor-critic functional architecture for reinforcement learning, in which pain reflects the costs imposed on actors in their bidding for control, so as to promote honest signaling and ultimately help the system optimize learning and future behavior.
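The bidding-for-control mechanism can be made concrete with a toy simulation (my own illustration under stated assumptions, not the authors' model): competing actors bid for behavioral control, the winner pays its bid as a cost ("pain"), and a simple learning rule that punishes over-bidding drives bids toward honesty.

```python
import random

class Actor:
    """A candidate controller of behavior, competing for control via a bid."""
    def __init__(self, name, true_value, init_bid):
        self.name = name
        self.true_value = true_value  # actual expected payoff if this actor acts
        self.bid = init_bid           # claimed value of taking control
        self.net_payoff = 0.0         # cumulative reward minus pain

def run_auction(actors, rounds=500, lr=0.1, explore=0.2, rng=None):
    rng = rng or random.Random(0)
    for _ in range(rounds):
        # Occasionally let a random actor act, so every actor gets feedback.
        if rng.random() < explore:
            winner = rng.choice(actors)
        else:
            winner = max(actors, key=lambda a: a.bid)
        reward = winner.true_value   # realized outcome of the chosen action
        pain = winner.bid            # cost ("pain") charged for taking control
        winner.net_payoff += reward - pain
        # Over-bidding yields a negative net payoff; nudging the bid toward
        # the realized reward drives bidding toward honest signaling.
        winner.bid += lr * (reward - winner.bid)
    return actors

# Two toy actors: one genuinely valuable behavior, one initially over-bidding.
flee, freeze = run_auction([Actor("flee", 0.8, 0.1), Actor("freeze", 0.3, 0.9)])
```

After training, bids track true values and control goes to the genuinely higher-value behavior. The class names, learning rule, and exploration scheme are hypothetical choices made for this sketch.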
"A possible evolutionary function of phenomenal conscious experience of pain."
Neuroscience of Consciousness, 2021(2): niab012.
Pub Date: 2021-04-17; eCollection Date: 2021-01-01; DOI: 10.1093/nc/niab001
Johannes Kleiner, Erik Hoel
The search for a scientific theory of consciousness should result in theories that are falsifiable. However, here we show that falsification is especially problematic for theories of consciousness. We formally describe the standard experimental setup for testing these theories. Based on a theory's application to some physical system, such as the brain, testing requires comparing a theory's predicted experience (given some internal observables of the system like brain imaging data) with an inferred experience (using report or behavior). If there is a mismatch between inference and prediction, a theory is falsified. We show that if inference and prediction are independent, it follows that any minimally informative theory of consciousness is automatically falsified. This is deeply problematic since the field's reliance on report or behavior to infer conscious experiences implies such independence, so this fragility affects many contemporary theories of consciousness. Furthermore, we show that if inference and prediction are strictly dependent, it follows that a theory is unfalsifiable. This affects theories which claim consciousness to be determined by report or behavior. Finally, we explore possible ways out of this dilemma.
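The dilemma can be made concrete in a few lines (my own toy formalization, not the authors' formal apparatus): a theory predicts an experience from internal observables, an inference procedure infers an experience from reports, and the theory is falsified iff some possible (observable, report) pair yields a mismatch.

```python
from itertools import product

def predict(observable):
    """Toy theory: map an internal observable (e.g. imaging data) to a predicted experience."""
    return "E1" if observable == 0 else "E2"

def infer(report):
    """Toy inference: map a report/behavior to an inferred experience."""
    return "E1" if report == 0 else "E2"

def is_falsified(pairs):
    """Falsified iff some physically possible (observable, report) pair mismatches."""
    return any(predict(o) != infer(r) for o, r in pairs)

# Independence: every (observable, report) combination can occur, so any
# theory predicting at least two distinct experiences is falsified.
independent_pairs = list(product([0, 1], [0, 1]))

# Strict dependence: reports are fixed by what the theory predicts, so no
# mismatch can ever occur and the theory is unfalsifiable.
dependent_pairs = [(0, 0), (1, 1)]
```

Here `independent_pairs` contains the mismatching pair (0, 1), falsifying the toy theory; `dependent_pairs` never mismatches. The experience labels and mappings are hypothetical placeholders for illustration.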
"Falsification and consciousness."
Neuroscience of Consciousness, 2021(1): niab001.