Reliability and Interpretability in Science and Deep Learning
Luigi Scorzato
Pub Date: 2024-06-25 | DOI: 10.1007/s11023-024-09682-0
In recent years, the question of the reliability of Machine Learning (ML) methods has acquired significant importance, and the analysis of the associated uncertainties has motivated a growing amount of research. However, most of these studies have applied standard error analysis to ML models—and in particular Deep Neural Network (DNN) models—which represent a rather significant departure from standard scientific modelling. It is therefore necessary to integrate the standard error analysis with a deeper epistemological analysis of the possible differences between DNN models and standard scientific modelling and the possible implications of these differences in the assessment of reliability. This article offers several contributions. First, it emphasises the ubiquitous role of model assumptions (both in ML and traditional science) against the illusion of theory-free science. Secondly, model assumptions are analysed from the point of view of their (epistemic) complexity, which is shown to be language-independent. It is argued that the high epistemic complexity of DNN models hinders the estimate of their reliability and also their prospect of long term progress. Some potential ways forward are suggested. Thirdly, this article identifies the close relation between a model’s epistemic complexity and its interpretability, as introduced in the context of responsible AI. This clarifies in which sense—and to what extent—the lack of understanding of a model (black-box problem) impacts its interpretability in a way that is independent of individual skills. It also clarifies how interpretability is a precondition for a plausible assessment of the reliability of any model, which cannot be based on statistical analysis alone. This article focuses on the comparison between traditional scientific models and DNN models. However, Random Forest (RF) and Logistic Regression (LR) models are also briefly considered.
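For a rough sense of the scale difference between the model classes compared here, the sketch below counts the adjustable parameters of a logistic regression and of a small DNN on the same task. The 100-feature input and the three hidden layers of 256 units are assumed purely for illustration, and raw parameter counting is only a crude proxy: the paper’s notion of epistemic complexity concerns the complexity of a model’s assumptions, not its parameter count.

```python
# Toy comparison of adjustable parameters: logistic regression vs. a hypothetical DNN.
# Feature count and layer widths are assumptions chosen purely for illustration.
n_features = 100

# Logistic regression: one weight per feature plus a bias term.
lr_params = n_features + 1

# A small fully connected DNN: 100 -> 256 -> 256 -> 256 -> 1, weights plus biases per layer.
layers = [n_features, 256, 256, 256, 1]
dnn_params = sum(n_in * n_out + n_out for n_in, n_out in zip(layers, layers[1:]))

print(lr_params)   # 101
print(dnn_params)  # 157697 -- orders of magnitude more adjustable structure to survey
```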
{"title":"Reliability and Interpretability in Science and Deep Learning","authors":"Luigi Scorzato","doi":"10.1007/s11023-024-09682-0","DOIUrl":"https://doi.org/10.1007/s11023-024-09682-0","url":null,"abstract":"<p>In recent years, the question of the reliability of Machine Learning (ML) methods has acquired significant importance, and the analysis of the associated uncertainties has motivated a growing amount of research. However, most of these studies have applied standard error analysis to ML models—and in particular Deep Neural Network (DNN) models—which represent a rather significant departure from standard scientific modelling. It is therefore necessary to integrate the standard error analysis with a deeper epistemological analysis of the possible differences between DNN models and standard scientific modelling and the possible implications of these differences in the assessment of reliability. This article offers several contributions. First, it emphasises the ubiquitous role of model assumptions (both in ML and traditional science) against the illusion of theory-free science. Secondly, model assumptions are analysed from the point of view of their (epistemic) complexity, which is shown to be language-independent. It is argued that the high epistemic complexity of DNN models hinders the estimate of their reliability and also their prospect of long term progress. Some potential ways forward are suggested. Thirdly, this article identifies the close relation between a model’s epistemic complexity and its interpretability, as introduced in the context of responsible AI. This clarifies in which sense—and to what extent—the lack of understanding of a model (black-box problem) impacts its interpretability in a way that is independent of individual skills. It also clarifies how interpretability is a precondition for a plausible assessment of the reliability of any model, which cannot be based on statistical analysis alone. This article focuses on the comparison between traditional scientific models and DNN models. However, Random Forest (RF) and Logistic Regression (LR) models are also briefly considered.</p>","PeriodicalId":51133,"journal":{"name":"Minds and Machines","volume":"25 1","pages":""},"PeriodicalIF":7.4,"publicationDate":"2024-06-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141502538","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Human Autonomy at Risk? An Analysis of the Challenges from AI
Carina Prunkl
Pub Date: 2024-06-24 | DOI: 10.1007/s11023-024-09665-1
Autonomy is a core value that is deeply entrenched in the moral, legal, and political practices of many societies. The development and deployment of artificial intelligence (AI) have raised new questions about AI’s impacts on human autonomy. However, systematic assessments of these impacts are still rare and often conducted on a case-by-case basis. In this article, I provide a conceptual framework that both ties together seemingly disjoint issues about human autonomy and highlights the differences between them. In the first part, I distinguish between distinct concerns that are currently addressed under the umbrella term ‘human autonomy’. In particular, I show how differentiating between autonomy-as-authenticity and autonomy-as-agency helps us to pinpoint separate challenges from AI deployment. Some of these challenges are already well-known (e.g. online manipulation or limitation of freedom), whereas others have received much less attention (e.g. adaptive preference formation). In the second part, I address the different roles AI systems can assume in the context of autonomy. In particular, I differentiate between AI systems taking on agential roles and AI systems being used as tools. I conclude that while there is no ‘silver bullet’ to address concerns about human autonomy, considering its various dimensions can help us to systematically address the associated risks.
{"title":"Human Autonomy at Risk? An Analysis of the Challenges from AI","authors":"Carina Prunkl","doi":"10.1007/s11023-024-09665-1","DOIUrl":"https://doi.org/10.1007/s11023-024-09665-1","url":null,"abstract":"<p>Autonomy is a core value that is deeply entrenched in the moral, legal, and political practices of many societies. The development and deployment of artificial intelligence (AI) have raised new questions about AI’s impacts on human autonomy. However, systematic assessments of these impacts are still rare and often held on a case-by-case basis. In this article, I provide a conceptual framework that both ties together seemingly disjoint issues about human autonomy, as well as highlights differences between them. In the first part, I distinguish between distinct concerns that are currently addressed under the umbrella term ‘human autonomy’. In particular, I show how differentiating between autonomy-as-authenticity and autonomy-as-agency helps us to pinpoint separate challenges from AI deployment. Some of these challenges are already well-known (e.g. online manipulation or limitation of freedom), whereas others have received much less attention (e.g. adaptive preference formation). In the second part, I address the different roles AI systems can assume in the context of autonomy. In particular, I differentiate between AI systems taking on agential roles and AI systems being used as tools. I conclude that while there is no ‘silver bullet’ to address concerns about human autonomy, considering its various dimensions can help us to systematically address the associated risks.</p>","PeriodicalId":51133,"journal":{"name":"Minds and Machines","volume":"18 1","pages":""},"PeriodicalIF":7.4,"publicationDate":"2024-06-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141502539","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Anthropomorphizing Machines: Reality or Popular Myth?
Simon Coghlan
Pub Date: 2024-06-20 | DOI: 10.1007/s11023-024-09686-w
According to a widespread view, people often anthropomorphize machines such as certain robots and computer and AI systems by erroneously attributing mental states to them. On this view, people almost irresistibly believe, even if only subconsciously, that machines with certain human-like features really have phenomenal or subjective experiences like sadness, happiness, desire, pain, joy, and distress, even though they lack such feelings. This paper questions this view by critiquing common arguments used to support it and by suggesting an alternative explanation. Even if people’s behavior and language regarding human-like machines suggests they believe those machines really have mental states, it is possible that they do not believe that at all. The paper also briefly discusses potential implications of regarding such anthropomorphism as a popular myth. The exercise illuminates the difficult concept of anthropomorphism, helping to clarify possible human relations with or toward machines that increasingly resemble humans and animals.
{"title":"Anthropomorphizing Machines: Reality or Popular Myth?","authors":"Simon Coghlan","doi":"10.1007/s11023-024-09686-w","DOIUrl":"https://doi.org/10.1007/s11023-024-09686-w","url":null,"abstract":"<p>According to a widespread view, people often anthropomorphize machines such as certain robots and computer and AI systems by erroneously attributing mental states to them. On this view, people almost irresistibly believe, even if only subconsciously, that machines with certain human-like features really have phenomenal or subjective experiences like sadness, happiness, desire, pain, joy, and distress, even though they lack such feelings. This paper questions this view by critiquing common arguments used to support it and by suggesting an alternative explanation. Even if people’s behavior and language regarding human-like machines suggests they believe those machines really have mental states, it is possible that they do not believe that at all. The paper also briefly discusses potential implications of regarding such anthropomorphism as a popular myth. The exercise illuminates the difficult concept of anthropomorphism, helping to clarify possible human relations with or toward machines that increasingly resemble humans and animals.</p>","PeriodicalId":51133,"journal":{"name":"Minds and Machines","volume":"79 1","pages":""},"PeriodicalIF":7.4,"publicationDate":"2024-06-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141502529","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
“The Human Must Remain the Central Focus”: Subjective Fairness Perceptions in Automated Decision-Making
Daria Szafran, Ruben L. Bach
Pub Date: 2024-06-19 | DOI: 10.1007/s11023-024-09684-y
The increasing use of algorithms in allocating resources and services in both private industry and public administration has sparked discussions about their consequences for inequality and fairness in contemporary societies. Previous research has shown that the use of automated decision-making (ADM) tools in high-stakes scenarios like the legal justice system might lead to adverse societal outcomes, such as systematic discrimination. Scholars have since proposed a variety of metrics to counteract and mitigate biases in ADM processes. While these metrics focus on technical fairness notions, they do not consider how members of the public, as the subjects most affected by algorithmic decisions, perceive fairness in ADM. To shed light on individuals’ subjective fairness perceptions, this study analyzes their answers to open-ended fairness questions about hypothetical ADM scenarios that were embedded in the German Internet Panel (Wave 54, July 2021), a probability-based longitudinal online survey. Respondents evaluated the fairness of vignettes describing the use of ADM tools across different contexts. Subsequently, they explained their fairness evaluation in a textual answer. Using qualitative content analysis, we inductively coded those answers (N = 3697). Based on their individual understanding of fairness, respondents addressed a wide range of aspects related to fairness in ADM, which is reflected in the 23 codes we identified. We subsumed those codes under four overarching themes: Human elements in decision-making, Shortcomings of the data, Social impact of AI, and Properties of AI. Our codes and themes provide a valuable resource for understanding which factors influence public fairness perceptions about ADM.
{"title":"“The Human Must Remain the Central Focus”: Subjective Fairness Perceptions in Automated Decision-Making","authors":"Daria Szafran, Ruben L. Bach","doi":"10.1007/s11023-024-09684-y","DOIUrl":"https://doi.org/10.1007/s11023-024-09684-y","url":null,"abstract":"<p>The increasing use of algorithms in allocating resources and services in both private industry and public administration has sparked discussions about their consequences for inequality and fairness in contemporary societies. Previous research has shown that the use of automated decision-making (ADM) tools in high-stakes scenarios like the legal justice system might lead to adverse societal outcomes, such as systematic discrimination. Scholars have since proposed a variety of metrics to counteract and mitigate biases in ADM processes. While these metrics focus on technical fairness notions, they do not consider how members of the public, as most affected subjects by algorithmic decisions, perceive fairness in ADM. To shed light on subjective fairness perceptions of individuals, this study analyzes individuals’ answers to open-ended fairness questions about hypothetical ADM scenarios that were embedded in the German Internet Panel (Wave 54, July 2021), a probability-based longitudinal online survey. Respondents evaluated the fairness of vignettes describing the use of ADM tools across different contexts. Subsequently, they explained their fairness evaluation providing a textual answer. Using qualitative content analysis, we inductively coded those answers (<i>N</i> = 3697). Based on their individual understanding of fairness, respondents addressed a wide range of aspects related to fairness in ADM which is reflected in the 23 codes we identified. We subsumed those codes under four overarching themes: <i>Human elements in decision-making</i>, <i>Shortcomings of the data</i>, <i>Social impact of AI</i>, and <i>Properties of AI</i>. Our codes and themes provide a valuable resource for understanding which factors influence public fairness perceptions about ADM.</p>","PeriodicalId":51133,"journal":{"name":"Minds and Machines","volume":"31 1","pages":""},"PeriodicalIF":7.4,"publicationDate":"2024-06-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141502542","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A Teleological Approach to Information Systems Design
Mattia Fumagalli, Roberta Ferrario, Giancarlo Guizzardi
Pub Date: 2024-06-18 | DOI: 10.1007/s11023-024-09673-1
In recent years, the design and production of information systems have seen significant growth. However, these information artefacts often exhibit characteristics that compromise their reliability. This issue appears to stem from the neglect or underestimation of certain crucial aspects in the application of Information Systems Design (ISD). For example, it is frequently difficult to prove when one of these products does not work properly or works incorrectly (falsifiability), their usage is often left to subjective experience and somewhat arbitrary choices (anecdotes), and their functions are often obscure to users as well as designers (explainability). In this paper, we propose an approach that can be used to support the analysis and (re-)design of information systems, grounded in a well-known theory of information, namely, teleosemantics. This approach emphasizes the importance of grounding the design and validation process on the dependencies between four core components: the producer (or designer), the produced (or used) information system, the consumer (or user), and the design (or use) purpose. We analyze the ambiguities and problems of considering these components separately. We then present some possible ways in which they can be combined through the teleological approach. We also discuss guidelines to prevent ISD from failing to address critical issues. Finally, we consider perspectives on applications to real, existing information technologies and some implications for explainable AI and ISD.
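A minimal sketch, assuming nothing beyond what the abstract states, of how the four core components named above can be tied together in a single structure; the class name, field names, and example values are hypothetical and are not the authors’ formalism.

```python
# Sketch of the four teleological components: producer, information system, consumer, purpose.
from dataclasses import dataclass

@dataclass
class InformationArtefact:
    producer: str        # the designer
    artefact: str        # the produced (or used) information system
    consumer: str        # the user
    design_purpose: str  # the purpose the artefact was designed for

def purpose_mismatch(system: InformationArtefact, use_purpose: str) -> bool:
    """Flag a use purpose that diverges from the design purpose (hypothetical check)."""
    return system.design_purpose != use_purpose

# Hypothetical example: a tool designed for one purpose but used for another.
triage = InformationArtefact("hospital IT team", "triage scoring tool",
                             "emergency nurses", "rank patients by urgency")
print(purpose_mismatch(triage, "allocate staffing budgets"))  # True
```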
{"title":"A Teleological Approach to Information Systems Design","authors":"Mattia Fumagalli, Roberta Ferrario, Giancarlo Guizzardi","doi":"10.1007/s11023-024-09673-1","DOIUrl":"https://doi.org/10.1007/s11023-024-09673-1","url":null,"abstract":"<p>In recent years, the design and production of information systems have seen significant growth. However, these <i>information artefacts</i> often exhibit characteristics that compromise their reliability. This issue appears to stem from the neglect or underestimation of certain crucial aspects in the application of <i>Information Systems Design (ISD)</i>. For example, it is frequently difficult to prove when one of these products does not work properly or works incorrectly (<i>falsifiability</i>), their usage is often left to subjective experience and somewhat arbitrary choices (<i>anecdotes</i>), and their functions are often obscure for users as well as designers (<i>explainability</i>). In this paper, we propose an approach that can be used to support the <i>analysis</i> and <i>re-(design)</i> of information systems grounded on a well-known theory of information, namely, <i>teleosemantics</i>. This approach emphasizes the importance of grounding the design and validation process on dependencies between four core components: the <i>producer (or designer)</i>, the <i>produced (or used) information system</i>, the <i>consumer (or user)</i>, and the <i>design (or use) purpose</i>. We analyze the ambiguities and problems of considering these components separately. We then present some possible ways in which they can be combined through the teleological approach. Also, we debate guidelines to prevent ISD from failing to address critical issues. Finally, we discuss perspectives on applications over real existing information technologies and some implications for explainable AI and ISD.</p>","PeriodicalId":51133,"journal":{"name":"Minds and Machines","volume":"89 1","pages":""},"PeriodicalIF":7.4,"publicationDate":"2024-06-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141502525","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
In the Craftsman’s Garden: AI, Alan Turing, and Stanley Cavell
Marie Theresa O’Connor
Pub Date: 2024-06-13 | DOI: 10.1007/s11023-024-09676-y
There is rising skepticism within public discourse about the nature of AI. By skepticism, I mean doubt about what we know about AI. At the same time, some AI speakers are raising the kinds of issues that usually really matter in analysis, such as issues relating to consent and coercion. This essay takes up the question of whether we should analyze a conversation differently because it is between a human and AI instead of between two humans and, if so, why. When is it okay, for instance, to read the phrases “please stop” or “please respect my boundaries” as meaning something other than what those phrases ordinarily mean – and what makes it so? If we ignore denials of consent, or put them in scare quotes, we should have a good reason. This essay focuses on two thinkers, Alan Turing and Stanley Cavell, who in different ways answer the question of whether it matters that a speaker is a machine. It proposes that Cavell’s work on the problem of other minds, in particular Cavell’s story in The Claim of Reason of an automaton whom he imagines meeting in a craftsman’s garden, may be especially helpful in thinking about how to analyze what AI has to say.
{"title":"In the Craftsman’s Garden: AI, Alan Turing, and Stanley Cavell","authors":"Marie Theresa O’Connor","doi":"10.1007/s11023-024-09676-y","DOIUrl":"https://doi.org/10.1007/s11023-024-09676-y","url":null,"abstract":"<p>There is rising skepticism within public discourse about the nature of AI. By skepticism, I mean doubt about what we know about AI. At the same time, some AI speakers are raising the kinds of issues that usually really matter in analysis, such as issues relating to consent and coercion. This essay takes up the question of whether we should analyze a conversation differently because it is between a human and AI instead of between two humans and, if so, why. When is it okay, for instance, to read the phrases “please stop” or “please respect my boundaries” as meaning something other than what those phrases ordinarily mean – and what makes it so? If we ignore denials of consent, or put them in scare quotes, we should have a good reason. This essay focuses on two thinkers, Alan Turing and Stanley Cavell, who in different ways answer the question of whether it matters that a speaker is a machine. It proposes that Cavell’s work on the problem of other minds, in particular Cavell’s story in <i>The Claim of Reason </i>of an automaton whom he imagines meeting in a craftsman’s garden, may be especially helpful in thinking about how to analyze what AI has to say.</p>","PeriodicalId":51133,"journal":{"name":"Minds and Machines","volume":"19 1","pages":""},"PeriodicalIF":7.4,"publicationDate":"2024-06-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141502540","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Find the Gap: AI, Responsible Agency and Vulnerability
Shannon Vallor, Tillmann Vierkant
Pub Date: 2024-06-05 | DOI: 10.1007/s11023-024-09674-0
The responsibility gap, commonly described as a core challenge for the effective governance of, and trust in, AI and autonomous systems (AI/AS), is traditionally associated with a failure of the epistemic and/or the control condition of moral responsibility: the ability to know what we are doing and exercise competent control over this doing. Yet these two conditions are a red herring when it comes to understanding the responsibility challenges presented by AI/AS, since evidence from the cognitive sciences shows that individual humans face very similar responsibility challenges with regard to these two conditions. While the problems of epistemic opacity and attenuated behaviour control are not unique to AI/AS technologies (though they can be exacerbated by them), we show that we can learn important lessons for AI/AS development and governance from how philosophers have recently revised the traditional concept of moral responsibility in response to these challenges to responsible human agency from the cognitive sciences. The resulting instrumentalist views of responsibility, which emphasize the forward-looking and flexible role of agency cultivation, hold considerable promise for integrating AI/AS into a healthy moral ecology. We note that there nevertheless is a gap in AI/AS responsibility that has yet to be extensively studied and addressed, one grounded in a relational asymmetry of vulnerability between human agents and sociotechnical systems like AI/AS. In the conclusion of this paper we note that attention to this vulnerability gap must inform and enable future attempts to construct trustworthy AI/AS systems and preserve the conditions for responsible human agency.
{"title":"Find the Gap: AI, Responsible Agency and Vulnerability","authors":"Shannon Vallor, Tillmann Vierkant","doi":"10.1007/s11023-024-09674-0","DOIUrl":"https://doi.org/10.1007/s11023-024-09674-0","url":null,"abstract":"<p>The <i>responsibility gap</i>, commonly described as a core challenge for the effective governance of, and trust in, AI and autonomous systems (AI/AS), is traditionally associated with a failure of the epistemic and/or the control condition of moral responsibility: the ability to know what we are doing and exercise competent control over this doing. Yet these two conditions are a red herring when it comes to understanding the responsibility challenges presented by AI/AS, since evidence from the cognitive sciences shows that individual humans face very similar responsibility challenges with regard to these two conditions. While the problems of epistemic opacity and attenuated behaviour control are not unique to AI/AS technologies (though they can be exacerbated by them), we show that we can learn important lessons for AI/AS development and governance from how philosophers have recently revised the traditional concept of moral responsibility in response to these challenges to responsible human agency from the cognitive sciences. The resulting instrumentalist views of responsibility, which emphasize the forward-looking and flexible role of agency cultivation, hold considerable promise for integrating AI/AS into a healthy moral ecology. We note that there nevertheless <i>is</i> a gap in AI/AS responsibility that has yet to be extensively studied and addressed, one grounded in a relational asymmetry of <i>vulnerability</i> between human agents and sociotechnical systems like AI/AS. In the conclusion of this paper we note that attention to this vulnerability gap must inform and enable future attempts to construct trustworthy AI/AS systems and preserve the conditions for responsible human agency.</p>","PeriodicalId":51133,"journal":{"name":"Minds and Machines","volume":"39 1","pages":""},"PeriodicalIF":7.4,"publicationDate":"2024-06-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141255356","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Models of Possibilities Instead of Logic as the Basis of Human Reasoning
P. N. Johnson-Laird, Ruth M. J. Byrne, Sangeet S. Khemlani
Pub Date: 2024-06-04 | DOI: 10.1007/s11023-024-09662-4
The theory of mental models and its computer implementations have led to crucial experiments showing that no standard logic—the sentential calculus and all logics that include it—can underlie human reasoning. The theory replaces the logical concept of validity (the conclusion is true in all cases in which the premises are true) with necessity (conclusions describe no more than possibilities to which the premises refer). Many inferences are both necessary and valid. But experiments show that individuals make necessary inferences that are invalid, e.g., Few people ate steak or sole; therefore, few people ate steak. Other crucial experiments show that individuals reject inferences that are not necessary but valid, e.g., He had the anesthetic or felt pain, but not both; therefore, he had the anesthetic or felt pain, or both. Nothing in logic can justify the rejection of a valid inference: a denial of its conclusion is inconsistent with its premises, and inconsistencies yield valid inferences of any conclusions whatsoever, including the one denied. So inconsistencies are catastrophic in logic. In contrast, the model theory treats all inferences as defeasible (nonmonotonic), and inconsistencies have the null model, which yields only the null model in conjunction with any other premises. So inconsistencies are local, which allows truth values in natural languages to be much richer than those that occur in the semantics of standard logics; and individuals verify assertions on the basis of both facts and possibilities that did not occur.
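The “valid but rejected” inference cited above can be checked mechanically: the short truth-table sketch below (illustrative code, not the authors’ implementation) confirms that the exclusive disjunction classically entails the inclusive one, since the conclusion is true in every case in which the premise is true. The “few people ate steak” example involves a quantifier and has no comparable encoding in the sentential calculus.

```python
# Truth-table check of the classically valid inference that participants reject:
# "he had the anesthetic or felt pain, but not both" entails
# "he had the anesthetic or felt pain, or both".
from itertools import product

def entails(premise, conclusion):
    # Classically valid iff the conclusion holds in every case where the premise holds.
    return all(conclusion(a, p) for a, p in product([True, False], repeat=2) if premise(a, p))

exclusive_or = lambda a, p: a != p   # anesthetic or pain, but not both
inclusive_or = lambda a, p: a or p   # anesthetic or pain, or both

print(entails(exclusive_or, inclusive_or))  # True: valid in the sentential calculus
```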
{"title":"Models of Possibilities Instead of Logic as the Basis of Human Reasoning","authors":"P. N. Johnson-Laird, Ruth M. J. Byrne, Sangeet S. Khemlani","doi":"10.1007/s11023-024-09662-4","DOIUrl":"https://doi.org/10.1007/s11023-024-09662-4","url":null,"abstract":"<p>The theory of mental models and its computer implementations have led to crucial experiments showing that no standard logic—the sentential calculus and all logics that include it—can underlie human reasoning. The theory replaces the logical concept of validity (the conclusion is true in all cases in which the premises are true) with necessity (conclusions describe no more than possibilities to which the premises refer). Many inferences are both necessary and valid. But experiments show that individuals make necessary inferences that are invalid, e.g., <i>Few people ate steak or sole</i>; therefore, <i>few people ate steak</i>. Other crucial experiments show that individuals reject inferences that are not necessary but valid, e.g., <i>He had the anesthetic or felt pain, but not both</i>; therefore, <i>he had the anesthetic or felt pain, or both</i>. Nothing in logic can justify the rejection of a valid inference: a denial of its conclusion is inconsistent with its premises, and inconsistencies yield valid inferences of any conclusions whatsoever including the one denied. So inconsistencies are catastrophic in logic. In contrast, the model theory treats all inferences as defeasible (nonmonotonic), and inconsistencies have the null model, which yields only the null model in conjunction with any other premises. So inconsistences are local. Which allows truth values in natural languages to be much richer than those that occur in the semantics of standard logics; and individuals verify assertions on the basis of both facts and possibilities that did not occur.</p>","PeriodicalId":51133,"journal":{"name":"Minds and Machines","volume":"48 1","pages":""},"PeriodicalIF":7.4,"publicationDate":"2024-06-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141259848","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The Hierarchical Correspondence View of Levels: A Case Study in Cognitive Science
Luke Kersten
Pub Date: 2024-06-03 | DOI: 10.1007/s11023-024-09678-w
There is a general conception of levels in philosophy which says that the world is arrayed into a hierarchy of levels and that there are different modes of analysis that correspond to each level of this hierarchy, what can be labelled the ‘Hierarchical Correspondence View of Levels’ (or HCL). The trouble is that, despite its considerable lineage and general status in philosophy of science and metaphysics, the HCL has largely escaped analysis in specific domains of inquiry. The goal of this paper is to take up a recent call to domain-specificity by examining the role of the HCL in cognitive science. I argue that the HCL is, in fact, a conception of levels that has been employed in cognitive science and that cognitive scientists should avoid its use where possible. The argument is that the HCL is problematic when applied to cognitive science specifically because it fails to distinguish two important kinds of shifts used when analysing information processing systems: shifts in grain and shifts in analysis. I conclude by proposing a revised version of the HCL which accommodates the distinction.
{"title":"The Hierarchical Correspondence View of Levels: A Case Study in Cognitive Science","authors":"Luke Kersten","doi":"10.1007/s11023-024-09678-w","DOIUrl":"https://doi.org/10.1007/s11023-024-09678-w","url":null,"abstract":"<p>There is a general conception of levels in philosophy which says that the world is arrayed into a hierarchy of levels and that there are different modes of analysis that correspond to each level of this hierarchy, what can be labelled the ‘Hierarchical Correspondence View of Levels” (or HCL). The trouble is that despite its considerable lineage and general status in philosophy of science and metaphysics the HCL has largely escaped analysis in specific domains of inquiry. The goal of this paper is to take up a recent call to domain-specificity by examining the role of the HCL in cognitive science. I argue that the HCL is, in fact, a conception of levels that has been employed in cognitive science and that cognitive scientists should avoid its use where possible. The argument is that the HCL is problematic when applied to cognitive science specifically because it fails to distinguish two important kinds of shifts used when analysing information processing systems: <i>shifts in grain</i> and <i>shifts in analysis</i>. I conclude by proposing a revised version of the HCL which accommodates the distinction.</p>","PeriodicalId":51133,"journal":{"name":"Minds and Machines","volume":"193 1","pages":""},"PeriodicalIF":7.4,"publicationDate":"2024-06-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141254981","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The New Mechanistic Approach and Cognitive Ontology—Or: What Role do (Neural) Mechanisms Play in Cognitive Ontology?
Beate Krickel
Pub Date: 2024-06-02 | DOI: 10.1007/s11023-024-09679-9
Cognitive ontology has become a popular topic in philosophy, cognitive psychology, and cognitive neuroscience. At its center is the question of which cognitive capacities should be included in the ontology of cognitive psychology and cognitive neuroscience. One common strategy for answering this question is to look at brain structures and determine the cognitive capacities for which they are responsible. Some authors interpret this strategy as a search for neural mechanisms, as understood by the so-called new mechanistic approach. In this article, I will show that this new mechanistic answer is confronted with what I call the triviality problem. A discussion of this problem will show that one cannot derive a meaningful cognitive ontology from neural mechanisms alone. Nonetheless, neural mechanisms play a crucial role in the discovery of a cognitive ontology because they are epistemic proxies for best systematizations.
{"title":"The New Mechanistic Approach and Cognitive Ontology—Or: What Role do (Neural) Mechanisms Play in Cognitive Ontology?","authors":"Beate Krickel","doi":"10.1007/s11023-024-09679-9","DOIUrl":"https://doi.org/10.1007/s11023-024-09679-9","url":null,"abstract":"<p>Cognitive ontology has become a popular topic in philosophy, cognitive psychology, and cognitive neuroscience. At its center is the question of which cognitive capacities should be included in the ontology of cognitive psychology and cognitive neuroscience. One common strategy for answering this question is to look at brain structures and determine the cognitive capacities for which they are responsible. Some authors interpret this strategy as a search for <i>neural mechanisms</i>, as understood by the so-called <i>new mechanistic approach</i>. In this article, I will show that this <i>new mechanistic answer</i> is confronted with what I call the <i>triviality problem</i>. A discussion of this problem will show that one cannot derive a meaningful cognitive ontology from neural mechanisms alone. Nonetheless, neural mechanisms play a crucial role in the discovery of a cognitive ontology because they are <i>epistemic proxies for best systematizations</i>.</p>","PeriodicalId":51133,"journal":{"name":"Minds and Machines","volume":"35 1","pages":""},"PeriodicalIF":7.4,"publicationDate":"2024-06-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141193717","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}