PHENOMENAL AND ACCESS CONSCIOUSNESS AND THE "HARD" PROBLEM: A VIEW FROM THE DESIGNER STANCE

Author: A. Sloman
Journal: International Journal of Machine Consciousness, Vol. 3, No. 1
Publication date: 2010-06-01
DOI: 10.1142/S1793843010000424 (https://doi.org/10.1142/S1793843010000424)
Citations: 84
Abstract
This paper is an attempt to summarise and justify critical comments I have been making over several decades about research on consciousness by philosophers, scientists and engineers. This includes (a) explaining why the concept of "phenomenal consciousness" (P-C), in the sense defined by Ned Block, is semantically flawed and unsuitable as a target for scientific research or machine modelling, whereas something like the concept of "access consciousness" (A-C), with which it is often contrasted, refers to phenomena that can be described and explained within a future scientific theory; and (b) explaining why the "hard problem" is a bogus problem, because of its dependence on the P-C concept. It is compared with another bogus problem, "the 'hard' problem of spatial identity", introduced as part of a tutorial on semantically flawed concepts. Different types of semantic flaw and conceptual confusion not normally studied outside analytical philosophy are distinguished. The semantic flaws of the "zombie" argument, closely allied with the P-C concept, are also explained. These topics are related both to the evolution of human and animal minds and brains and to requirements for human-like robots. The diversity of the phenomena related to the concept "consciousness" as ordinarily used makes it a polymorphic concept, partly analogous to concepts like "efficient", "sensitive", and "impediment", all of which need extra information to be provided before they can be applied to anything, and even then the criteria of applicability differ. As a result there cannot be one explanation of consciousness, one set of neural correlates of consciousness, one explanation for the evolution of consciousness, nor one machine model of consciousness. We need many of each.

I present a way of making progress based on what McCarthy called "the designer stance", using facts about running virtual machines, without which current computers obviously could not work. I suggest the same is true of biological minds, because biological evolution long ago "discovered" a need for something like virtual machinery for self-monitoring and self-extending information-processing systems, and produced far more sophisticated versions than human engineers have so far achieved.