
Latest Publications in Minds and Machines

The New Mechanistic Approach and Cognitive Ontology—Or: What Role do (Neural) Mechanisms Play in Cognitive Ontology?
IF 7.4 · CAS Tier 3 (Computer Science) · Q1 (Arts and Humanities) · Pub Date: 2024-06-02 · DOI: 10.1007/s11023-024-09679-9
Beate Krickel

Cognitive ontology has become a popular topic in philosophy, cognitive psychology, and cognitive neuroscience. At its center is the question of which cognitive capacities should be included in the ontology of cognitive psychology and cognitive neuroscience. One common strategy for answering this question is to look at brain structures and determine the cognitive capacities for which they are responsible. Some authors interpret this strategy as a search for neural mechanisms, as understood by the so-called new mechanistic approach. In this article, I will show that this new mechanistic answer is confronted with what I call the triviality problem. A discussion of this problem will show that one cannot derive a meaningful cognitive ontology from neural mechanisms alone. Nonetheless, neural mechanisms play a crucial role in the discovery of a cognitive ontology because they are epistemic proxies for best systematizations.

Citations: 0
Tool-Augmented Human Creativity
IF 7.4 · CAS Tier 3 (Computer Science) · Q1 (Arts and Humanities) · Pub Date: 2024-05-26 · DOI: 10.1007/s11023-024-09677-x
Kjell Jørgen Hole

Creativity is the hallmark of human intelligence. Roli et al. (Frontiers in Ecology and Evolution 9:806283, 2022) state that algorithms cannot achieve human creativity. This paper analyzes cooperation between humans and intelligent algorithmic tools to compensate for algorithms’ limited creativity. The intelligent tools have functionality modeled on the neocortex, the brain’s center for learning, reasoning, planning, and language. The analysis provides four key insights about human-tool cooperation to solve challenging problems. First, no neocortex-based tool without feelings can achieve human creativity. Second, an interactive tool that taps users’ feeling-guided creativity enhances the ability to solve complex problems. Third, user-led abductive reasoning incorporating human creativity is essential to human-tool cooperative problem-solving. Fourth, although stakeholders must take moral responsibility for the adverse impact of tool answers, it is still essential to teach tools moral values so that they generate trustworthy answers. The analysis concludes that the scientific community should create neocortex-based tools that augment human creativity and enhance problem-solving, rather than autonomous algorithmic entities with independent but less creative problem-solving.

Citations: 0
Black-Box Testing and Auditing of Bias in ADM Systems
IF 7.4 · CAS Tier 3 (Computer Science) · Q1 (Arts and Humanities) · Pub Date: 2024-05-25 · DOI: 10.1007/s11023-024-09666-0
Tobias D. Krafft, Marc P. Hauer, Katharina Zweig

For years, the number of opaque algorithmic decision-making systems (ADM systems) with a large impact on society has been increasing: for example, systems that compute decisions about the future recidivism of criminals or creditworthiness, or the many small decision-computing systems within social networks that create rankings, provide recommendations, or filter content. Concerns that such a system makes biased decisions can be difficult to investigate, whether by people affected, NGOs, stakeholders, governmental testing and auditing authorities, or other external parties. The scientific testing and auditing literature rarely focuses on the specific needs of such investigations and suffers from ambiguous terminology. With this paper, we aim to support this investigation process by collecting, explaining, and categorizing methods of testing for bias that are applicable to black-box systems, given that inputs and the respective outputs can be observed. For this purpose, we provide a taxonomy that can be used to select test methods suited to the respective situation. This taxonomy takes multiple aspects into account, for example the effort to implement a given test method, its technical requirements (such as the need for ground truth), and the social constraints of the investigation, e.g., the protection of business secrets. Furthermore, we analyze which test method can be used in the context of which black-box audit concept. It turns out that various factors, such as the type of black-box audit or the lack of an oracle, may limit the selection of applicable tests. With the help of this paper, people or organizations who want to test an ADM system for bias can identify which test methods and auditing concepts are applicable and what implications they entail.
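The kind of black-box test surveyed here can be illustrated with a minimal sketch: given only observed input–output pairs from an opaque ADM system, one can estimate a group-fairness metric such as disparate impact. The function name, the toy observations, and the four-fifths threshold below are illustrative assumptions, not methods taken from the paper.

```python
from collections import defaultdict

def disparate_impact(records, group_key, decision_key):
    """Estimate disparate impact from observed input/output pairs.

    records: list of dicts, each holding a protected-group attribute and
    the observed binary decision of the black-box system. Returns the
    ratio of the lowest to the highest group acceptance rate, plus the
    per-group rates.
    """
    accepted = defaultdict(int)
    total = defaultdict(int)
    for r in records:
        total[r[group_key]] += 1
        accepted[r[group_key]] += int(r[decision_key])
    rates = {g: accepted[g] / total[g] for g in total}
    return min(rates.values()) / max(rates.values()), rates

# Hypothetical observations collected from an opaque ADM system.
observations = [
    {"group": "A", "approved": 1}, {"group": "A", "approved": 1},
    {"group": "A", "approved": 0}, {"group": "B", "approved": 1},
    {"group": "B", "approved": 0}, {"group": "B", "approved": 0},
]
ratio, rates = disparate_impact(observations, "group", "approved")
print(rates)        # per-group acceptance rates
print(ratio < 0.8)  # common "four-fifths" heuristic flag
```

Note that a test like this needs no access to the system internals, only observed decisions, which is precisely the setting the taxonomy addresses.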

Citations: 0
Reflective Artificial Intelligence
IF 7.4 · CAS Tier 3 (Computer Science) · Q1 (Arts and Humanities) · Pub Date: 2024-05-18 · DOI: 10.1007/s11023-024-09664-2
Peter R. Lewis, Ştefan Sarkadi

As artificial intelligence (AI) technology advances, we increasingly delegate mental tasks to machines. However, today’s AI systems usually perform these tasks with an unusual imbalance of insight and understanding: new, deeper insights are present, yet many important qualities that a human mind would previously have brought to the activity are utterly absent. Therefore, it is crucial to ask which features of minds we have replicated, which are missing, and whether that matters. One core feature that humans bring to tasks, when dealing with the ambiguity, emergent knowledge, and social context presented by the world, is reflection. Yet this capability is completely missing from current mainstream AI. In this paper we ask what reflective AI might look like. Then, drawing on notions of reflection in complex systems, cognitive science, and agents, we sketch an architecture for reflective AI agents and highlight ways forward.
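By way of illustration, the toy agent below separates an object level that acts from a meta level that inspects the agent’s own record and revises its policy; all class and method names, the threshold policy, and the 30% error trigger are our illustrative assumptions, not the architecture proposed in the paper.

```python
class ReflectiveAgent:
    """Toy two-layer agent: an object level that acts, and a meta level
    that reflects on recent performance and revises the object level."""

    def __init__(self, threshold=0.5):
        self.threshold = threshold  # object-level decision rule
        self.history = []           # episodic record used for reflection

    def act(self, signal):
        """Object level: a simple threshold policy."""
        return signal > self.threshold

    def observe(self, signal, outcome_ok):
        """Record the signal, the action taken, and whether it worked."""
        self.history.append((signal, self.act(signal), outcome_ok))

    def reflect(self):
        """Meta level: inspect the history and adjust the policy when the
        agent's self-model and the world diverge too often."""
        if not self.history:
            return
        failing = [s for s, _, ok in self.history if not ok]
        if len(failing) / len(self.history) > 0.3:
            # Revise the decision rule toward the mean failing signal.
            self.threshold = sum(failing) / len(failing)
            self.history.clear()

agent = ReflectiveAgent()
for signal, ok in [(0.7, False), (0.9, False), (0.6, True)]:
    agent.observe(signal, ok)
agent.reflect()
print(agent.threshold)  # policy revised after reflection
```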

Citations: 0
Regulation by Design: Features, Practices, Limitations, and Governance Implications
IF 7.4 · CAS Tier 3 (Computer Science) · Q1 (Arts and Humanities) · Pub Date: 2024-05-17 · DOI: 10.1007/s11023-024-09675-z
Kostina Prifti, Jessica Morley, Claudio Novelli, Luciano Floridi

Regulation by design (RBD) is a growing research field that explores, develops, and criticises the regulative function of design. In this article, we provide a qualitative thematic synthesis of the existing literature. The aim is to explore and analyse RBD’s core features, practices, limitations, and related governance implications. To fulfil this aim, we examine the extant literature on RBD in the context of digital technologies. We start by identifying and structuring the core features of RBD, namely the goals, regulators, regulatees, methods, and technologies. Building on that structure, we distinguish among three types of RBD practices: compliance by design, value creation by design, and optimisation by design. We then explore the challenges and limitations of RBD practices, which stem from risks associated with compliance by design, contextual limitations, or methodological uncertainty. Finally, we examine the governance implications of RBD and outline possible future directions of the research field and its practices.

Citations: 0
Towards Transnational Fairness in Machine Learning: A Case Study in Disaster Response Systems
IF 7.4 · CAS Tier 3 (Computer Science) · Q1 (Arts and Humanities) · Pub Date: 2024-05-09 · DOI: 10.1007/s11023-024-09663-3
Cem Kozcuer, Anne Mollen, Felix Bießmann

Research on fairness in machine learning (ML) has largely focused on individual and group fairness. With the adoption of ML-based technologies as assistive technology in complex societal transformations or crisis situations on a global scale, these existing definitions fail to account for algorithmic fairness transnationally. We propose to complement existing perspectives on algorithmic fairness with a notion of transnational algorithmic fairness and take first steps towards an analytical framework. We exemplify the relevance of a transnational fairness assessment in a case study on a disaster response system using images from online social media. In the presented case, ML systems are used as a support tool for categorizing and classifying images from social media after a disaster event, as an almost instantly available source of information for coordinating disaster response. We present an empirical analysis assessing the transnational fairness of the application’s outputs, based on national socio-demographic development indicators as potentially discriminatory attributes. In doing so, the paper combines interdisciplinary perspectives from data analytics, ML, digital media studies, and media sociology in order to address fairness beyond the technical system. The case study reflects an embedded perspective on people’s everyday media use and on social media platforms as producers of sociality and processors of data, with relevance far beyond the case of algorithmic fairness in disaster scenarios. Especially in light of the concentration of artificial intelligence (AI) development in the Global North and a perceived hegemonic constellation, we argue that transnational fairness offers a perspective on global injustices in relation to AI development and application that has the potential to substantiate discussions by identifying gaps in data and technology. These analyses will ultimately enable researchers and policy makers to derive actionable insights that could alleviate existing problems with the fair use of AI technology and mitigate risks associated with future developments.
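A minimal sketch of the kind of empirical analysis described, under invented assumptions: hypothetical per-image records from a disaster-response classifier are grouped by a national development indicator to check whether accuracy varies systematically across countries. The column names, tiers, and data are illustrative only.

```python
import pandas as pd

# Hypothetical per-image records: country of origin, a national
# development indicator bracket (e.g. an HDI tier), and whether the
# model's label matched ground truth.
records = pd.DataFrame({
    "country":  ["NP", "NP", "PH", "PH", "US", "US", "DE", "DE"],
    "hdi_tier": ["medium", "medium", "high", "high",
                 "very_high", "very_high", "very_high", "very_high"],
    "correct":  [0, 1, 1, 0, 1, 1, 1, 1],
})

# Transnational fairness check: does accuracy track the development
# indicator instead of being uniform across countries?
by_tier = records.groupby("hdi_tier")["correct"].mean()
print(by_tier)
print("max accuracy gap:", by_tier.max() - by_tier.min())
```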

Citations: 0
AI Within Online Discussions: Rational, Civil, Privileged?
IF 7.4 · CAS Tier 3 (Computer Science) · Q1 (Arts and Humanities) · Pub Date: 2024-05-04 · DOI: 10.1007/s11023-024-09658-0
Jonas Aaron Carstens, Dennis Friess

While early optimists saw online discussions as potential spaces for deliberation, the reality of many online spaces is characterized by incivility and irrationality. Increasingly, AI tools are considered a solution for fostering deliberative discourse. Against the backdrop of previous research, we show that AI tools for online discussions focus heavily on the deliberative norms of rationality and civility. In operationalizing those norms for AI tools, the complex deliberative dimensions are simplified, and the focus lies on detecting argumentative structures in argument mining or verbal markers of supposedly uncivil comments. Where the fairness of such tools is considered, the focus lies on data bias and an input–output framing of the problem. We argue that looking beyond bias and analyzing such applications through a sociotechnical frame reveals how they interact with social hierarchies and inequalities, reproducing patterns of exclusion. The current focus on verbal markers of incivility and argument mining risks excluding minority voices and privileges those who have more access to education. Finally, we present a normative argument for why examining AI tools for online discourse through a sociotechnical frame is ethically preferable, as ignoring the predictable negative effects we describe would amount to a form of objectionable indifference.
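To make the criticized simplification concrete, here is a deliberately naive marker-based incivility flagger of the sort the paper describes; the lexicon and function are invented for illustration and, as the paper argues, ignore context and pragmatics entirely.

```python
# A fixed lexicon of "uncivil" markers stands in for the deliberative
# norm of civility -- the operationalization the authors criticize.
UNCIVIL_MARKERS = {"idiot", "shut up", "liar"}  # illustrative lexicon

def flag_incivility(comment: str) -> bool:
    """Flag a comment if any marker occurs; tone, context, and speaker
    position are ignored, which is precisely the simplification at issue."""
    text = comment.lower()
    return any(marker in text for marker in UNCIVIL_MARKERS)

print(flag_incivility("You are an idiot."))             # True
print(flag_incivility("That policy quietly harms us."))  # False: no marker,
# though the comment may carry exactly the critique marginalized voices raise.
```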

Citations: 0
A Genealogical Approach to Algorithmic Bias
IF 7.4 · CAS Tier 3 (Computer Science) · Q1 (Arts and Humanities) · Pub Date: 2024-05-02 · DOI: 10.1007/s11023-024-09672-2
Marta Ziosi, David Watson, Luciano Floridi

The Fairness, Accountability, and Transparency (FAccT) literature tends to treat bias as a problem that requires ex post solutions (e.g. fairness metrics), rather than addressing the underlying social and technical conditions that (re)produce it. In this article, we propose a complementary strategy that uses genealogy as a constructive, epistemic critique to explain algorithmic bias in terms of the conditions that enable it. We focus on XAI feature attributions (Shapley values) and counterfactual approaches as potential tools to gauge these conditions and offer two main contributions. One is constructive: we develop a theoretical framework to classify these approaches according to their relevance for bias as evidence of social disparities. We draw on Pearl’s ladder of causation (Causality: Models, Reasoning, and Inference, Cambridge University Press, 2000; 2nd edn, 2009, https://doi.org/10.1017/CBO9780511803161) to order these XAI approaches by their ability to answer fairness-relevant questions and identify fairness-relevant solutions. The other contribution is critical: we evaluate these approaches in terms of their assumptions about the role of protected characteristics in discriminatory outcomes. We achieve this by building on Kohler-Hausmann’s constructivist theory of discrimination (Northwest Univ Law Rev 113(5):1163–1227, 2019). We derive three recommendations to help XAI practitioners develop, and AI policymakers regulate, tools that address the conditions of algorithmic bias and hence mitigate its future occurrence.
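Shapley-value feature attributions of the kind the authors examine can be made concrete with a brute-force computation; the toy linear scorer below, with a protected attribute among its inputs, and the baseline-substitution choice for absent features are illustrative assumptions, not the paper’s method.

```python
from itertools import combinations
from math import factorial

def shapley_values(model, x, baseline):
    """Exact Shapley values for a model over a small feature set.

    Features absent from a coalition are filled with baseline values,
    a common (and contestable) choice when auditing attributions.
    """
    n = len(x)
    phi = [0.0] * n
    for i in range(n):
        others = [j for j in range(n) if j != i]
        for size in range(len(others) + 1):
            for coalition in combinations(others, size):
                weight = factorial(size) * factorial(n - size - 1) / factorial(n)
                with_i = [x[j] if (j in coalition or j == i) else baseline[j]
                          for j in range(n)]
                without_i = [x[j] if j in coalition else baseline[j]
                             for j in range(n)]
                phi[i] += weight * (model(with_i) - model(without_i))
    return phi

# Toy scorer where feature 0 plays the role of a protected attribute.
model = lambda f: 2.0 * f[0] + 1.0 * f[1] + 0.5 * f[2]
print(shapley_values(model, x=[1, 3, 2], baseline=[0, 0, 0]))
# For a linear model the attributions recover each term's contribution:
# [2.0, 3.0, 1.0] -- a large share for the protected feature is the kind
# of evidence of social disparity the framework asks us to interpret.
```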

Citations: 0
Gamification, Side Effects, and Praise and Blame for Outcomes
IF 7.4 · CAS Tier 3 (Computer Science) · Q1 (Arts and Humanities) · Pub Date: 2024-04-25 · DOI: 10.1007/s11023-024-09661-5
Sven Nyholm
{"title":"Gamification, Side Effects, and Praise and Blame for Outcomes","authors":"Sven Nyholm","doi":"10.1007/s11023-024-09661-5","DOIUrl":"https://doi.org/10.1007/s11023-024-09661-5","url":null,"abstract":"","PeriodicalId":51133,"journal":{"name":"Minds and Machines","volume":null,"pages":null},"PeriodicalIF":7.4,"publicationDate":"2024-04-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140655011","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Anthropomorphising Machines and Computerising Minds: The Crosswiring of Languages between Artificial Intelligence and Brain & Cognitive Sciences
IF 7.4 · CAS Tier 3 (Computer Science) · Q1 (Arts and Humanities) · Pub Date: 2024-04-25 · DOI: 10.1007/s11023-024-09670-4
Luciano Floridi, Anna C Nobre

The article discusses the process of “conceptual borrowing”, according to which a new discipline, as it emerges, develops its technical vocabulary partly by appropriating terms from neighbouring disciplines. The phenomenon is likened to Carl Schmitt’s observation that modern political concepts have theological roots. The authors argue that, through extensive conceptual borrowing, AI has ended up describing computers anthropomorphically, as computational brains with psychological properties, while the brain and cognitive sciences have ended up describing brains and minds computationally and informationally, as biological computers. The crosswiring between the technical languages of these disciplines is not merely metaphorical but can lead to confusion and to damaging assumptions and consequences. The article ends on an optimistic note about the self-adjusting nature of technical meanings in language and the ability to leave misleading conceptual baggage behind when confronted with advances in understanding and factual knowledge.

Citations: 0