
Minds and Machines: Latest Publications

Reliability and Interpretability in Science and Deep Learning
IF 7.4 | CAS Zone 3 (Computer Science) | Q2 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2024-06-25 | DOI: 10.1007/s11023-024-09682-0
Luigi Scorzato

In recent years, the question of the reliability of Machine Learning (ML) methods has acquired significant importance, and the analysis of the associated uncertainties has motivated a growing amount of research. However, most of these studies have applied standard error analysis to ML models—and in particular Deep Neural Network (DNN) models—which represent a rather significant departure from standard scientific modelling. It is therefore necessary to integrate the standard error analysis with a deeper epistemological analysis of the possible differences between DNN models and standard scientific modelling and the possible implications of these differences in the assessment of reliability. This article offers several contributions. First, it emphasises the ubiquitous role of model assumptions (both in ML and traditional science) against the illusion of theory-free science. Secondly, model assumptions are analysed from the point of view of their (epistemic) complexity, which is shown to be language-independent. It is argued that the high epistemic complexity of DNN models hinders the estimate of their reliability and also their prospect of long term progress. Some potential ways forward are suggested. Thirdly, this article identifies the close relation between a model’s epistemic complexity and its interpretability, as introduced in the context of responsible AI. This clarifies in which sense—and to what extent—the lack of understanding of a model (black-box problem) impacts its interpretability in a way that is independent of individual skills. It also clarifies how interpretability is a precondition for a plausible assessment of the reliability of any model, which cannot be based on statistical analysis alone. This article focuses on the comparison between traditional scientific models and DNN models. However, Random Forest (RF) and Logistic Regression (LR) models are also briefly considered.
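
The contrast this abstract draws between statistical error analysis and epistemological analysis can be made concrete with a small sketch. The code below is not from the paper; it is a minimal illustration, under assumed data and model sizes, of how the same bootstrap-style error analysis applies indifferently to a low-complexity model (logistic regression) and to a DNN-style model, which is exactly why such an analysis cannot by itself register the difference in their epistemic complexity.

```python
# Minimal sketch (illustrative assumptions throughout): bootstrap "standard error
# analysis" applied uniformly to a simple model and to a small neural network.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.neural_network import MLPClassifier
from sklearn.utils import resample

X, y = make_classification(n_samples=500, n_features=10, random_state=0)
x_query = X[:1]  # one point whose predicted probability we want an error bar for

def bootstrap_predictions(make_model, n_boot=50):
    """Refit the model on bootstrap resamples and collect its predictions at x_query."""
    preds = []
    for seed in range(n_boot):
        Xb, yb = resample(X, y, random_state=seed)
        preds.append(make_model().fit(Xb, yb).predict_proba(x_query)[0, 1])
    return np.array(preds)

lr = bootstrap_predictions(lambda: LogisticRegression(max_iter=1000))
nn = bootstrap_predictions(
    lambda: MLPClassifier(hidden_layer_sizes=(64, 64), max_iter=500, random_state=0))

# Both analyses produce a standard error, but neither says anything about the very
# different model assumptions (epistemic complexity) behind the two predictions.
print(f"LR : p = {lr.mean():.3f} +/- {lr.std():.3f}")
print(f"MLP: p = {nn.mean():.3f} +/- {nn.std():.3f}")
```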

Citations: 0
Human Autonomy at Risk? An Analysis of the Challenges from AI
IF 7.4 | CAS Zone 3 (Computer Science) | Q2 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2024-06-24 | DOI: 10.1007/s11023-024-09665-1
Carina Prunkl

Autonomy is a core value that is deeply entrenched in the moral, legal, and political practices of many societies. The development and deployment of artificial intelligence (AI) have raised new questions about AI’s impacts on human autonomy. However, systematic assessments of these impacts are still rare and often conducted on a case-by-case basis. In this article, I provide a conceptual framework that both ties together seemingly disjoint issues about human autonomy and highlights the differences between them. In the first part, I distinguish between distinct concerns that are currently addressed under the umbrella term ‘human autonomy’. In particular, I show how differentiating between autonomy-as-authenticity and autonomy-as-agency helps us to pinpoint separate challenges from AI deployment. Some of these challenges are already well-known (e.g. online manipulation or limitation of freedom), whereas others have received much less attention (e.g. adaptive preference formation). In the second part, I address the different roles AI systems can assume in the context of autonomy. In particular, I differentiate between AI systems taking on agential roles and AI systems being used as tools. I conclude that while there is no ‘silver bullet’ to address concerns about human autonomy, considering its various dimensions can help us to systematically address the associated risks.

Citations: 0
Anthropomorphizing Machines: Reality or Popular Myth?
IF 7.4 | CAS Zone 3 (Computer Science) | Q2 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2024-06-20 | DOI: 10.1007/s11023-024-09686-w
Simon Coghlan

According to a widespread view, people often anthropomorphize machines such as certain robots and computer and AI systems by erroneously attributing mental states to them. On this view, people almost irresistibly believe, even if only subconsciously, that machines with certain human-like features really have phenomenal or subjective experiences like sadness, happiness, desire, pain, joy, and distress, even though they lack such feelings. This paper questions this view by critiquing common arguments used to support it and by suggesting an alternative explanation. Even if people’s behavior and language regarding human-like machines suggests they believe those machines really have mental states, it is possible that they do not believe that at all. The paper also briefly discusses potential implications of regarding such anthropomorphism as a popular myth. The exercise illuminates the difficult concept of anthropomorphism, helping to clarify possible human relations with or toward machines that increasingly resemble humans and animals.

Citations: 0
“The Human Must Remain the Central Focus”: Subjective Fairness Perceptions in Automated Decision-Making
IF 7.4 | CAS Zone 3 (Computer Science) | Q2 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2024-06-19 | DOI: 10.1007/s11023-024-09684-y
Daria Szafran, Ruben L. Bach

The increasing use of algorithms in allocating resources and services in both private industry and public administration has sparked discussions about their consequences for inequality and fairness in contemporary societies. Previous research has shown that the use of automated decision-making (ADM) tools in high-stakes scenarios like the legal justice system might lead to adverse societal outcomes, such as systematic discrimination. Scholars have since proposed a variety of metrics to counteract and mitigate biases in ADM processes. While these metrics focus on technical fairness notions, they do not consider how members of the public, as the subjects most affected by algorithmic decisions, perceive fairness in ADM. To shed light on the subjective fairness perceptions of individuals, this study analyzes individuals’ answers to open-ended fairness questions about hypothetical ADM scenarios that were embedded in the German Internet Panel (Wave 54, July 2021), a probability-based longitudinal online survey. Respondents evaluated the fairness of vignettes describing the use of ADM tools across different contexts. Subsequently, they explained their fairness evaluation by providing a textual answer. Using qualitative content analysis, we inductively coded those answers (N = 3697). Based on their individual understanding of fairness, respondents addressed a wide range of aspects related to fairness in ADM, which is reflected in the 23 codes we identified. We subsumed those codes under four overarching themes: Human elements in decision-making, Shortcomings of the data, Social impact of AI, and Properties of AI. Our codes and themes provide a valuable resource for understanding which factors influence public fairness perceptions about ADM.
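
For readers unfamiliar with the "technical fairness notions" the abstract contrasts with subjective perceptions, the sketch below computes two common group-fairness metrics from a toy set of ADM decisions. It is our illustration, not part of the study, and the arrays are made-up assumptions; the study's point is precisely that such metrics leave public perceptions out of view.

```python
# Toy sketch of two standard group-fairness metrics (illustrative data only).
import numpy as np

y_true = np.array([1, 0, 1, 1, 0, 0, 1, 0, 1, 1])  # ground-truth outcomes
y_pred = np.array([1, 0, 1, 0, 0, 1, 1, 0, 0, 1])  # ADM tool's decisions
group  = np.array([0, 0, 0, 0, 0, 1, 1, 1, 1, 1])  # protected-group membership

def statistical_parity_difference(y_pred, group):
    """Difference in positive-decision rates between the two groups."""
    return y_pred[group == 1].mean() - y_pred[group == 0].mean()

def equal_opportunity_difference(y_true, y_pred, group):
    """Difference in true-positive rates between the two groups."""
    tpr = lambda g: y_pred[(group == g) & (y_true == 1)].mean()
    return tpr(1) - tpr(0)

print("Statistical parity difference:", statistical_parity_difference(y_pred, group))
print("Equal opportunity difference:", equal_opportunity_difference(y_true, y_pred, group))
```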

Citations: 0
A Teleological Approach to Information Systems Design
IF 7.4 | CAS Zone 3 (Computer Science) | Q2 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2024-06-18 | DOI: 10.1007/s11023-024-09673-1
Mattia Fumagalli, Roberta Ferrario, Giancarlo Guizzardi

In recent years, the design and production of information systems have seen significant growth. However, these information artefacts often exhibit characteristics that compromise their reliability. This issue appears to stem from the neglect or underestimation of certain crucial aspects in the application of Information Systems Design (ISD). For example, it is frequently difficult to prove when one of these products does not work properly or works incorrectly (falsifiability), their usage is often left to subjective experience and somewhat arbitrary choices (anecdotes), and their functions are often obscure for users as well as designers (explainability). In this paper, we propose an approach that can be used to support the analysis and re-(design) of information systems grounded on a well-known theory of information, namely, teleosemantics. This approach emphasizes the importance of grounding the design and validation process on dependencies between four core components: the producer (or designer), the produced (or used) information system, the consumer (or user), and the design (or use) purpose. We analyze the ambiguities and problems of considering these components separately. We then present some possible ways in which they can be combined through the teleological approach. Also, we debate guidelines to prevent ISD from failing to address critical issues. Finally, we discuss perspectives on applications over real existing information technologies and some implications for explainable AI and ISD.
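
To make the four-component dependency concrete, the sketch below models a producer, an information system, a consumer, and a design purpose as one specification, with a check that can fail. This is our own illustration of the falsifiability point, not the authors' formalism, and every name and the toy purpose are assumptions.

```python
# Hypothetical sketch: the four teleological components bundled into one spec,
# plus a purpose check that makes the design claim falsifiable.
from dataclasses import dataclass
from typing import Callable

@dataclass
class InformationSystemSpec:
    producer: str                              # who designed the artefact
    system: Callable[[float], float]           # the produced information system
    consumer: str                              # who uses its outputs
    purpose: Callable[[float, float], bool]    # what the output is supposed to achieve

def satisfies_purpose(spec: InformationSystemSpec, test_input: float) -> bool:
    """The design claim is testable only because this check can come out False."""
    return spec.purpose(test_input, spec.system(test_input))

# Toy example: a Celsius-to-Fahrenheit converter whose stated purpose is to be
# within 0.1 degrees of the true conversion for the operator who reads it.
spec = InformationSystemSpec(
    producer="design team",
    system=lambda c: c * 9 / 5 + 32,
    consumer="operator",
    purpose=lambda c, out: abs(out - (c * 9 / 5 + 32)) < 0.1,
)
print(satisfies_purpose(spec, 25.0))  # True here; a faulty converter would yield False
```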

Citations: 0
In the Craftsman’s Garden: AI, Alan Turing, and Stanley Cavell
IF 7.4 | CAS Zone 3 (Computer Science) | Q2 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2024-06-13 | DOI: 10.1007/s11023-024-09676-y
Marie Theresa O’Connor

There is rising skepticism within public discourse about the nature of AI. By skepticism, I mean doubt about what we know about AI. At the same time, some AI speakers are raising the kinds of issues that usually really matter in analysis, such as issues relating to consent and coercion. This essay takes up the question of whether we should analyze a conversation differently because it is between a human and AI instead of between two humans and, if so, why. When is it okay, for instance, to read the phrases “please stop” or “please respect my boundaries” as meaning something other than what those phrases ordinarily mean – and what makes it so? If we ignore denials of consent, or put them in scare quotes, we should have a good reason. This essay focuses on two thinkers, Alan Turing and Stanley Cavell, who in different ways answer the question of whether it matters that a speaker is a machine. It proposes that Cavell’s work on the problem of other minds, in particular Cavell’s story in The Claim of Reason of an automaton whom he imagines meeting in a craftsman’s garden, may be especially helpful in thinking about how to analyze what AI has to say.

Citations: 0
A sociotechnical system perspective on AI
IF 7.4 | CAS Zone 3 (Computer Science) | Q1 Arts and Humanities | Pub Date: 2024-06-12 | DOI: 10.1007/s11023-024-09680-2
O. Kudina, Ibo van de Poel
{"title":"A sociotechnical system perspective on AI","authors":"O. Kudina, Ibo van de Poel","doi":"10.1007/s11023-024-09680-2","DOIUrl":"https://doi.org/10.1007/s11023-024-09680-2","url":null,"abstract":"","PeriodicalId":51133,"journal":{"name":"Minds and Machines","volume":null,"pages":null},"PeriodicalIF":7.4,"publicationDate":"2024-06-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141354251","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Find the Gap: AI, Responsible Agency and Vulnerability
IF 7.4 | CAS Zone 3 (Computer Science) | Q1 Arts and Humanities | Pub Date: 2024-06-05 | DOI: 10.1007/s11023-024-09674-0
Shannon Vallor, Tillmann Vierkant

The responsibility gap, commonly described as a core challenge for the effective governance of, and trust in, AI and autonomous systems (AI/AS), is traditionally associated with a failure of the epistemic and/or the control condition of moral responsibility: the ability to know what we are doing and exercise competent control over this doing. Yet these two conditions are a red herring when it comes to understanding the responsibility challenges presented by AI/AS, since evidence from the cognitive sciences shows that individual humans face very similar responsibility challenges with regard to these two conditions. While the problems of epistemic opacity and attenuated behaviour control are not unique to AI/AS technologies (though they can be exacerbated by them), we show that we can learn important lessons for AI/AS development and governance from how philosophers have recently revised the traditional concept of moral responsibility in response to these challenges to responsible human agency from the cognitive sciences. The resulting instrumentalist views of responsibility, which emphasize the forward-looking and flexible role of agency cultivation, hold considerable promise for integrating AI/AS into a healthy moral ecology. We note that there nevertheless is a gap in AI/AS responsibility that has yet to be extensively studied and addressed, one grounded in a relational asymmetry of vulnerability between human agents and sociotechnical systems like AI/AS. In the conclusion of this paper we note that attention to this vulnerability gap must inform and enable future attempts to construct trustworthy AI/AS systems and preserve the conditions for responsible human agency.

Citations: 0
Models of Possibilities Instead of Logic as the Basis of Human Reasoning
IF 7.4 | CAS Zone 3 (Computer Science) | Q1 Arts and Humanities | Pub Date: 2024-06-04 | DOI: 10.1007/s11023-024-09662-4
P. N. Johnson-Laird, Ruth M. J. Byrne, Sangeet S. Khemlani

The theory of mental models and its computer implementations have led to crucial experiments showing that no standard logic—the sentential calculus and all logics that include it—can underlie human reasoning. The theory replaces the logical concept of validity (the conclusion is true in all cases in which the premises are true) with necessity (conclusions describe no more than possibilities to which the premises refer). Many inferences are both necessary and valid. But experiments show that individuals make necessary inferences that are invalid, e.g., Few people ate steak or sole; therefore, few people ate steak. Other crucial experiments show that individuals reject inferences that are not necessary but valid, e.g., He had the anesthetic or felt pain, but not both; therefore, he had the anesthetic or felt pain, or both. Nothing in logic can justify the rejection of a valid inference: a denial of its conclusion is inconsistent with its premises, and inconsistencies yield valid inferences of any conclusions whatsoever, including the one denied. So inconsistencies are catastrophic in logic. In contrast, the model theory treats all inferences as defeasible (nonmonotonic), and inconsistencies have the null model, which yields only the null model in conjunction with any other premises. So inconsistencies are local. This allows truth values in natural languages to be much richer than those that occur in the semantics of standard logics; and individuals verify assertions on the basis of both facts and possibilities that did not occur.
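
The abstract's anesthetic example can be worked through mechanically. The sketch below is our simplified reading of the contrast, not the authors' computer implementation: classical validity asks whether the conclusion holds in every case where the premise holds, while the model theory's necessity (as glossed in the abstract) asks whether the conclusion describes no possibilities beyond those the premise refers to.

```python
# Toy rendering of classical validity vs. model-theoretic necessity (our assumptions).
from itertools import product

def models(prop, variables=("anesthetic", "pain")):
    """All truth-value assignments (possibilities) in which the proposition holds."""
    return {vals for vals in product([True, False], repeat=len(variables))
            if prop(dict(zip(variables, vals)))}

exclusive_or = lambda v: v["anesthetic"] != v["pain"]   # premise: one or the other, not both
inclusive_or = lambda v: v["anesthetic"] or v["pain"]   # conclusion: one or the other, or both

premise_models = models(exclusive_or)      # {(True, False), (False, True)}
conclusion_models = models(inclusive_or)   # the same two possibilities plus (True, True)

# Classical validity: the conclusion is true in all cases in which the premise is true.
print("valid:", premise_models <= conclusion_models)       # True
# Necessity: the conclusion describes no more than the possibilities the premise refers to.
print("necessary:", conclusion_models <= premise_models)   # False - the inference people reject
```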

Citations: 0
The Hierarchical Correspondence View of Levels: A Case Study in Cognitive Science
IF 7.4 | CAS Zone 3 (Computer Science) | Q1 Arts and Humanities | Pub Date: 2024-06-03 | DOI: 10.1007/s11023-024-09678-w
Luke Kersten

There is a general conception of levels in philosophy which says that the world is arrayed into a hierarchy of levels and that there are different modes of analysis that correspond to each level of this hierarchy, what can be labelled the ‘Hierarchical Correspondence View of Levels’ (or HCL). The trouble is that, despite its considerable lineage and general status in philosophy of science and metaphysics, the HCL has largely escaped analysis in specific domains of inquiry. The goal of this paper is to take up a recent call to domain-specificity by examining the role of the HCL in cognitive science. I argue that the HCL is, in fact, a conception of levels that has been employed in cognitive science and that cognitive scientists should avoid its use where possible. The argument is that the HCL is problematic when applied to cognitive science specifically because it fails to distinguish two important kinds of shifts used when analysing information processing systems: shifts in grain and shifts in analysis. I conclude by proposing a revised version of the HCL which accommodates the distinction.

Citations: 0