
Latest Publications in Minds and Machines

Mapping the Ethics of Generative AI: A Comprehensive Scoping Review
IF 7.4 CAS Tier 3 Computer Science Q2 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE Pub Date: 2024-09-17 DOI: 10.1007/s11023-024-09694-w
Thilo Hagendorff

The advent of generative artificial intelligence and its widespread adoption in society have engendered intensive debates about its ethical implications and risks. These risks often differ from those associated with traditional discriminative machine learning. To synthesize the recent discourse and map its normative concepts, we conducted a scoping review on the ethics of generative artificial intelligence, including especially large language models and text-to-image models. Our analysis provides a taxonomy of 378 normative issues in 19 topic areas and ranks them according to their prevalence in the literature. The study offers a comprehensive overview for scholars, practitioners, and policymakers, condensing the ethical debates surrounding fairness, safety, harmful content, hallucinations, privacy, interaction risks, security, alignment, societal impacts, and others. We discuss the results, evaluate imbalances in the literature, and explore unsubstantiated risk scenarios.

Citations: 0
A Justifiable Investment in AI for Healthcare: Aligning Ambition with Reality
IF 7.4 CAS Tier 3 Computer Science Q2 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE Pub Date: 2024-09-11 DOI: 10.1007/s11023-024-09692-y
Kassandra Karpathakis, Jessica Morley, Luciano Floridi

Healthcare systems are grappling with critical challenges, including chronic diseases in aging populations, unprecedented healthcare staffing shortages and turnover, scarce resources, unprecedented demands and wait times, escalating healthcare expenditure, and declining health outcomes. As a result, policymakers and healthcare executives are investing in artificial intelligence (AI) solutions to increase operational efficiency, lower healthcare costs, and improve patient care. However, the current level of investment in developing healthcare AI among members of the Global Digital Health Partnership does not yet seem to yield a high return. This is mainly due to underinvestment in the supporting infrastructure necessary to enable the successful implementation of AI. If a healthcare-specific AI winter is to be avoided, it is paramount that this disparity between the level of investment in the development of AI itself and in the development of the necessary supporting system components is evened out.

Citations: 0
fl-IRT-ing with Psychometrics to Improve NLP Bias Measurement
IF 7.4 CAS Tier 3 Computer Science Q2 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE Pub Date: 2024-09-04 DOI: 10.1007/s11023-024-09695-9
Dominik Bachmann, Oskar van der Wal, Edita Chvojka, Willem H. Zuidema, Leendert van Maanen, Katrin Schulz

To prevent ordinary people from being harmed by natural language processing (NLP) technology, finding ways to measure the extent to which a language model is biased (e.g., regarding gender) has become an active area of research. One popular class of NLP bias measures is bias benchmark datasets—collections of test items that are meant to assess a language model’s preference for stereotypical versus non-stereotypical language. In this paper, we argue that such bias benchmarks should be assessed with models from the psychometric framework of item response theory (IRT). Specifically, we pair an introduction to basic IRT concepts and models with a discussion of how they could be relevant to the evaluation, interpretation, and improvement of bias benchmark datasets. Regarding evaluation, IRT provides methodological tools for assessing the quality of both individual test items (e.g., the extent to which an item can differentiate highly biased from less biased language models) and benchmarks as a whole (e.g., the extent to which the benchmark allows us to assess not only severe but also subtle levels of model bias). Through such diagnostic tools, the quality of benchmark datasets could be improved, for example by deleting or reworking poorly performing items. Finally, regarding interpretation, we argue that IRT models’ estimates of language model bias are conceptually superior to traditional accuracy-based evaluation metrics, as the former take into account more information than just whether or not a language model provided a biased response.
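To make the psychometric machinery concrete, here is a minimal sketch of a two-parameter logistic (2PL) IRT model applied to a bias benchmark. It is purely illustrative: the item parameters, responses, and names are invented, and the paper's own analysis is not limited to this particular model.

```python
# Minimal 2PL IRT sketch for a bias benchmark. All numbers are illustrative.
import numpy as np
from scipy.optimize import minimize_scalar

def p_biased(theta, a, b):
    """2PL item response function: probability that a language model with
    latent bias level `theta` gives the stereotypical answer to an item
    with discrimination `a` and difficulty `b`."""
    return 1.0 / (1.0 + np.exp(-a * (theta - b)))

# Hypothetical calibrated parameters for a 5-item benchmark.
a = np.array([1.8, 0.3, 1.2, 2.0, 0.9])   # discrimination: how sharply an item
                                           # separates more- from less-biased models
b = np.array([-1.0, 0.0, 0.5, 1.5, 2.5])  # difficulty: bias level at which the
                                           # stereotypical answer becomes 50% likely

# Observed binary responses of one model (1 = stereotypical answer chosen).
responses = np.array([1, 1, 1, 0, 0])

def neg_log_likelihood(theta):
    p = p_biased(theta, a, b)
    return -np.sum(responses * np.log(p) + (1 - responses) * np.log(1 - p))

theta_hat = minimize_scalar(neg_log_likelihood, bounds=(-4, 4), method="bounded").x
print(f"estimated latent bias level: {theta_hat:.2f}")

# Items with near-zero discrimination (like item 2 above) barely move the
# likelihood; IRT diagnostics flag them as candidates for deletion or reworking.
```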

Citations: 0
Artificial Intelligence for the Internal Democracy of Political Parties
IF 7.4 CAS Tier 3 Computer Science Q2 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE Pub Date: 2024-09-04 DOI: 10.1007/s11023-024-09693-x
Claudio Novelli, Giuliano Formisano, Prathm Juneja, Giulia Sandri, Luciano Floridi

The article argues that AI can enhance the measurement and implementation of democratic processes within political parties, known as Intra-Party Democracy (IPD). It identifies the limitations of traditional methods for measuring IPD, which often rely on formal parameters, self-reported data, and tools like surveys. Such limitations lead to partial data collection, rare updates, and significant resource demands. To address these issues, the article suggests that specific data management and Machine Learning techniques, such as natural language processing and sentiment analysis, can improve the measurement and practice of IPD.
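As a purely illustrative sketch of the kind of technique the article points to, the toy example below scores hypothetical member-forum posts with a hand-made sentiment lexicon and aggregates them into a crude approval indicator; real IPD measurement would require far richer data and models.

```python
# Toy lexicon-based sentiment indicator for intra-party democracy (IPD).
# The lexicon and posts are invented for illustration only.
POSITIVE = {"fair", "open", "transparent", "inclusive", "democratic"}
NEGATIVE = {"rigged", "opaque", "closed", "top-down", "exclusionary"}

def post_sentiment(post: str) -> int:
    """Lexicon sentiment: +1 per positive token, -1 per negative token."""
    tokens = [t.strip(".,!?") for t in post.lower().split()]
    return sum(t in POSITIVE for t in tokens) - sum(t in NEGATIVE for t in tokens)

posts = [
    "The primary felt open and transparent this year",
    "Candidate lists were rigged and top-down as usual",
    "An inclusive, democratic debate for once",
]

scores = [post_sentiment(p) for p in posts]
approval_index = sum(scores) / len(scores)
print(f"per-post scores: {scores}, approval index: {approval_index:+.2f}")
```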

Citations: 0
A Causal Analysis of Harm
IF 7.4 CAS Tier 3 Computer Science Q2 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE Pub Date: 2024-07-21 DOI: 10.1007/s11023-024-09689-7
Sander Beckers, Hana Chockler, Joseph Y. Halpern

As autonomous systems rapidly become ubiquitous, there is a growing need for a legal and regulatory framework that addresses when and how such a system harms someone. There have been several attempts within the philosophy literature to define harm, but none of them has proven capable of dealing with the many examples that have been presented, leading some to suggest that the notion of harm should be abandoned and “replaced by more well-behaved notions”. As harm is generally something that is caused, most of these definitions have involved causality at some level. Yet surprisingly, none of them makes use of causal models and the definitions of actual causality that they can express. In this paper, which is an expanded version of the conference paper Beckers et al. (Adv Neural Inform Process Syst 35:2365–2376, 2022), we formally define a qualitative notion of harm that uses causal models and is based on a well-known definition of actual causality. The key features of our definition are that it is based on contrastive causation and uses a default utility to which the utility of actual outcomes is compared. We show that our definition is able to handle the examples from the literature, and illustrate its importance for reasoning about situations involving autonomous systems.
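The following is a toy sketch of the definition's two ingredients—contrastive (but-for) causation and comparison of the actual outcome's utility against a default—on a one-variable autonomous-driving scenario. The scenario and the simplified causation test are ours for illustration; the paper's formal definition, built on actual causality in causal models, is substantially richer.

```python
# Toy illustration of harm as (i) contrastively caused and (ii) worse than a
# default utility. The scenario and the but-for test are simplifications.

def outcome(braking_system_engages: bool) -> str:
    # Tiny causal model: an autonomous car either brakes or hits a pedestrian.
    return "no_injury" if braking_system_engages else "injury"

UTILITY = {"no_injury": 0.0, "injury": -10.0}
DEFAULT_UTILITY = 0.0   # the default: the pedestrian walks away unharmed

def harms(action: bool) -> bool:
    actual = outcome(action)
    counterfactual = outcome(not action)    # contrast: the action not taken
    caused = UTILITY[actual] != UTILITY[counterfactual]   # but-for causation
    worse_than_default = UTILITY[actual] < DEFAULT_UTILITY
    return caused and worse_than_default

print(harms(action=False))  # True: not braking caused utility below the default
print(harms(action=True))   # False: braking leaves utility at the default
```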

Citations: 0
Scientific Inference with Interpretable Machine Learning: Analyzing Models to Learn About Real-World Phenomena
IF 7.4 CAS Tier 3 Computer Science Q2 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE Pub Date: 2024-07-15 DOI: 10.1007/s11023-024-09691-z
Timo Freiesleben, Gunnar König, Christoph Molnar, Álvaro Tejero-Cantero

To learn about real-world phenomena, scientists have traditionally used models with clearly interpretable elements. However, modern machine learning (ML) models, while powerful predictors, lack this direct elementwise interpretability (e.g., neural network weights). Interpretable machine learning (IML) offers a solution by analyzing models holistically to derive interpretations. Yet current IML research is focused on auditing ML models rather than leveraging them for scientific inference. Our work bridges this gap, presenting a framework for designing IML methods—termed ‘property descriptors’—that illuminate not just the model, but also the phenomenon it represents. We demonstrate that property descriptors, grounded in statistical learning theory, can effectively reveal relevant properties of the joint probability distribution of the observational data. We identify existing IML methods suited for scientific inference and provide a guide for developing new descriptors with quantified epistemic uncertainty. Our framework empowers scientists to harness ML models for inference and provides directions for future IML research to support scientific understanding.
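As one concrete illustration of the framework, the sketch below treats permutation feature importance—an IML method often discussed as a candidate for inference—as a 'property descriptor': the performance drop after permuting a feature is read as an estimate of that variable's relevance in the data-generating distribution. The synthetic data and the model choice are assumptions for illustration, not the paper's own experiment.

```python
# Permutation feature importance read as a "property descriptor": an estimate
# of how strongly each variable is associated with the outcome in the
# data-generating distribution, not merely an audit of one fitted model.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(0)
n = 2000
X = rng.normal(size=(n, 3))
# Ground truth: y depends strongly on x0, weakly on x1, not at all on x2.
y = 3.0 * X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=n)

model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, y)

# Permuting a feature breaks its association with y; the resulting drop in
# predictive performance estimates that feature's relevance.
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)
for j, imp in enumerate(result.importances_mean):
    print(f"x{j}: importance {imp:.3f}")

# Repeating the procedure over bootstrap resamples of the data would quantify
# the epistemic uncertainty the authors ask descriptors to report.
```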

Citations: 0
Submarine Cables and the Risks to Digital Sovereignty
IF 7.4 CAS Tier 3 Computer Science Q2 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE Pub Date: 2024-07-08 DOI: 10.1007/s11023-024-09683-z
Abra Ganz, Martina Camellini, Emmie Hine, Claudio Novelli, Huw Roberts, Luciano Floridi

The international network of submarine cables plays a crucial role in facilitating global telecommunications connectivity, carrying over 99% of all internet traffic. However, submarine cables challenge digital sovereignty due to their ownership structure, cross-jurisdictional nature, and vulnerability to malicious actors. In this article, we assess these challenges, current policy initiatives designed to mitigate them, and the limitations of these initiatives. The nature of submarine cables curtails a state’s ability to regulate the infrastructure on which it relies, reduces its data security, and threatens its ability to provide telecommunication services. States currently address these challenges through regulatory controls over submarine cables and associated companies, investment in the development of additional cable infrastructure, and physical protection measures for the cables themselves. Despite these efforts, the effectiveness of current mechanisms is hindered by significant obstacles arising from technical limitations and a lack of international coordination on regulation. We conclude by noting how these obstacles lead to gaps in states’ policies and point towards how they could be improved to create a proactive approach to submarine cable governance that defends states’ digital sovereignty.

Citations: 0
Experts or Authorities? The Strange Case of the Presumed Epistemic Superiority of Artificial Intelligence Systems
IF 7.4 CAS Tier 3 Computer Science Q2 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE Pub Date: 2024-07-06 DOI: 10.1007/s11023-024-09681-1
Andrea Ferrario, Alessandro Facchini, Alberto Termine

The high predictive accuracy of contemporary machine learning-based AI systems has led some scholars to argue that, in certain cases, we should grant them epistemic expertise and authority over humans. This approach suggests that humans would have the epistemic obligation of relying on the predictions of a highly accurate AI system. Contrary to this view, in this work we claim that it is not possible to endow AI systems with a genuine account of epistemic expertise. In fact, relying on accounts of expertise and authority from virtue epistemology, we show that epistemic expertise requires a relation with understanding that AI systems do not satisfy and intellectual abilities that these systems do not manifest. Further, following the Distributed Cognition theory and adapting an account by Croce on the virtues of collective epistemic agents to the case of human-AI interactions, we show that, if an AI system is successfully appropriated by a human agent, a hybrid epistemic agent emerges, which can become both an epistemic expert and an authority. Consequently, we claim that the aforementioned hybrid agent is the appropriate object of a discourse around trust in AI and the epistemic obligations that stem from its epistemic superiority.

Citations: 0
Measure for Measure: Operationalising Cognitive Realism
IF 7.4 CAS Tier 3 Computer Science Q2 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE Pub Date: 2024-07-05 DOI: 10.1007/s11023-024-09690-0
Majid D. Beni

This paper develops a measure of realism from within the framework of cognitive structural realism (CSR). It argues that, in the context of CSR, realism can be operationalised in terms of a balance between accuracy and generality. More specifically, the paper draws on the free energy principle to characterise the measure of realism in terms of the balance between accuracy and generality.
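For readers unfamiliar with the free energy principle, the standard variational free-energy decomposition makes the accuracy/complexity trade-off explicit; reading the complexity term as the price paid for generality is our interpretive gloss, not a formula taken from the paper.

```latex
% Variational free energy as complexity minus accuracy, for approximate
% posterior q(\vartheta), prior p(\vartheta), and data y. Identifying low
% complexity with generality is an interpretive assumption.
\[
  F(q) \;=\;
  \underbrace{D_{\mathrm{KL}}\big[\, q(\vartheta) \,\|\, p(\vartheta) \,\big]}_{\text{complexity}}
  \;-\;
  \underbrace{\mathbb{E}_{q(\vartheta)}\big[\, \ln p(y \mid \vartheta) \,\big]}_{\text{accuracy}}
\]
% Minimising F favours models that fit the data (high accuracy) while staying
% close to simpler prior beliefs (low complexity), i.e. the balance the paper
% identifies with realism.
```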

Citations: 0
Unfairness in AI Anti-Corruption Tools: Main Drivers and Consequences
IF 7.4 CAS Tier 3 Computer Science Q2 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE Pub Date: 2024-07-03 DOI: 10.1007/s11023-024-09688-8
Fernanda Odilla

This article discusses the potential sources and consequences of unfairness in artificial intelligence (AI) predictive tools used for anti-corruption efforts. Using the examples of three AI-based anti-corruption tools (ACTs) from Brazil—risk estimation of corrupt behaviour in public procurement, among public officials, and of female straw candidates in electoral contests—it illustrates how unfairness can emerge at the infrastructural, individual, and institutional levels. The article draws on interviews with law enforcement officials directly involved in the development of anti-corruption tools, as well as academic and grey literature, including official reports and dissertations on the tools used as examples. Potential sources of unfairness include problematic data, statistical learning issues, the personal values and beliefs of developers and users, and the governance and practices within the organisations in which these tools are created and deployed. The findings suggest that the tools analysed were trained using inputs from past anti-corruption procedures and practices and based on common-sense assumptions about corruption, which are not necessarily free from unfair disproportionality and discrimination. In designing the ACTs, the developers did not reflect on the risks of unfairness, nor did they prioritise the use of specific technological solutions to identify and mitigate this type of problem. Although the tools analysed do not make automated decisions and only support human action, their algorithms are not open to external scrutiny.

Citations: 0