
Latest publications in AI & Society

Magical thinking and the test of humanity: we have seen the danger of AI and it is us
IF 2.9 Q2 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE Pub Date : 2023-09-14 DOI: 10.1007/s00146-023-01775-1
David Morris
{"title":"Magical thinking and the test of humanity: we have seen the danger of AI and it is us","authors":"David Morris","doi":"10.1007/s00146-023-01775-1","DOIUrl":"10.1007/s00146-023-01775-1","url":null,"abstract":"","PeriodicalId":47165,"journal":{"name":"AI & Society","volume":"39 6","pages":"3047 - 3049"},"PeriodicalIF":2.9,"publicationDate":"2023-09-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"134912285","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Global governance and the normalization of artificial intelligence as ‘good’ for human health
IF 2.9 Q2 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE Pub Date : 2023-09-13 DOI: 10.1007/s00146-023-01774-2
Michael Strange, Jason Tucker

The term ‘artificial intelligence’ has arguably come to function in political discourse as, what Laclau called, an ‘empty signifier’. This article traces the shifting political discourse on AI within three key institutions of global governance–OHCHR, WHO, and UNESCO–and, in so doing, highlights the role of ‘crisis’ moments in justifying a series of pivotal re-articulations. Most important has been the attachment of AI to the narrative around digital automation in human healthcare. Greatly enabled by the societal context of the pandemic, all three institutions have moved from being critical of the unequal power relations in the economy of AI to, today, reframing themselves primarily as facilitators tasked with helping to ensure the application of AI technologies. The analysis identifies a shift in which human health and healthcare is framed as in a ‘crisis’ to which AI technology is presented as the remedy. The article argues the need to trace these discursive shifts as a means by which to understand, monitor, and where necessary also hold to account these changes in the governance of AI in society.

{"title":"Global governance and the normalization of artificial intelligence as ‘good’ for human health","authors":"Michael Strange,&nbsp;Jason Tucker","doi":"10.1007/s00146-023-01774-2","DOIUrl":"10.1007/s00146-023-01774-2","url":null,"abstract":"<div><p>The term ‘artificial intelligence’ has arguably come to function in political discourse as, what Laclau called, an ‘empty signifier’. This article traces the shifting political discourse on AI within three key institutions of global governance–OHCHR, WHO, and UNESCO–and, in so doing, highlights the role of ‘crisis’ moments in justifying a series of pivotal re-articulations. Most important has been the attachment of AI to the narrative around digital automation in human healthcare. Greatly enabled by the societal context of the pandemic, all three institutions have moved from being critical of the unequal power relations in the economy of AI to, today, reframing themselves primarily as facilitators tasked with helping to ensure the application of AI technologies. The analysis identifies a shift in which human health and healthcare is framed as in a ‘crisis’ to which AI technology is presented as the remedy. The article argues the need to trace these discursive shifts as a means by which to understand, monitor, and where necessary also hold to account these changes in the governance of AI in society.</p></div>","PeriodicalId":47165,"journal":{"name":"AI & Society","volume":"39 6","pages":"2667 - 2676"},"PeriodicalIF":2.9,"publicationDate":"2023-09-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://link.springer.com/content/pdf/10.1007/s00146-023-01774-2.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"135786044","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Sculpting the social algorithm for radical futurity
Q2 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE Pub Date : 2023-09-13 DOI: 10.1007/s00146-023-01760-8
Anisa Matthews
{"title":"Sculpting the social algorithm for radical futurity","authors":"Anisa Matthews","doi":"10.1007/s00146-023-01760-8","DOIUrl":"https://doi.org/10.1007/s00146-023-01760-8","url":null,"abstract":"","PeriodicalId":47165,"journal":{"name":"AI & Society","volume":"57 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-09-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"135735182","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Digital sovereignty, digital infrastructures, and quantum horizons
Q2 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE Pub Date : 2023-09-13 DOI: 10.1007/s00146-023-01729-7
Geoff Gordon
This article holds that governmental investments in quantum technologies speak to the imaginable futures of digital sovereignty and digital infrastructures, two major areas of change driven by related technologies like AI and Big Data, among other things, in international law today. Under intense development today for future interpolation into digital systems that they may alter, quantum technologies occupy a sort of liminal position, rooted in existing assemblages of computational technologies while pointing to new horizons for them. The possibilities they raise are neither certain nor determinate, but active investments in them (legal, political and material investments) offer perspective on digital technology-driven influences on an international legal imagination. In contributing to visions of the future that are guiding ambitions for digital sovereignty and digital infrastructures, quantum technologies condition digital technology-driven changes to international law and legal imagination in the present. Privileging observation and description, I adapt and utilize a diffractive method with the aim to discern what emerges out of the interference among the several related things assembled for this article, including material technologies and legal institutions. In conclusion, I observe ambivalent changes to an international legal imagination, changes which promise transformation but appear nonetheless to reproduce current distributions of power and resources.
{"title":"Digital sovereignty, digital infrastructures, and quantum horizons","authors":"Geoff Gordon","doi":"10.1007/s00146-023-01729-7","DOIUrl":"https://doi.org/10.1007/s00146-023-01729-7","url":null,"abstract":"Abstract This article holds that governmental investments in quantum technologies speak to the imaginable futures of digital sovereignty and digital infrastructures, two major areas of change driven by related technologies like AI and Big Data, among other things, in international law today. Under intense development today for future interpolation into digital systems that they may alter, quantum technologies occupy a sort of liminal position, rooted in existing assemblages of computational technologies while pointing to new horizons for them. The possibilities they raise are neither certain nor determinate, but active investments in them (legal, political and material investments) offer perspective on digital technology-driven influences on an international legal imagination. In contributing to visions of the future that are guiding ambitions for digital sovereignty and digital infrastructures, quantum technologies condition digital technology-driven changes to international law and legal imagination in the present. Privileging observation and description, I adapt and utilize a diffractive method with the aim to discern what emerges out of the interference among the several related things assembled for this article, including material technologies and legal institutions. In conclusion, I observe ambivalent changes to an international legal imagination, changes which promise transformation but appear nonetheless to reproduce current distributions of power and resources.","PeriodicalId":47165,"journal":{"name":"AI & Society","volume":"41 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-09-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"134990418","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 1
Reimagining Benin Bronzes using generative adversarial networks
Q2 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE Pub Date : 2023-09-12 DOI: 10.1007/s00146-023-01761-7
Minne Atairu
{"title":"Reimagining Benin Bronzes using generative adversarial networks","authors":"Minne Atairu","doi":"10.1007/s00146-023-01761-7","DOIUrl":"https://doi.org/10.1007/s00146-023-01761-7","url":null,"abstract":"","PeriodicalId":47165,"journal":{"name":"AI & Society","volume":"27 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-09-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"135826903","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Social context of the issue of discriminatory algorithmic decision-making systems
IF 2.9 Q2 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE Pub Date : 2023-09-09 DOI: 10.1007/s00146-023-01741-x
Daniel Varona, Juan Luis Suarez

Algorithmic decision-making (ADM) systems have the potential to amplify existing discriminatory patterns and negatively affect perceptions of justice in society. There is a need to revise the mechanisms that address discrimination in light of the unique challenges presented by these systems, which are not easily auditable or explainable. Research efforts to bring fairness to ADM solutions should be viewed as a matter of justice, and trust among actors should be ensured through technology design. These ideas move us to explore notions of justice within the field of political thinking, aiming to identify the elements that frame the social context of the discriminatory decisions produced by ADM solutions. Our explorations suggest that efforts to bring fairness to ADM systems should be seen as a matter of justice and that trust in these systems can be ensured through careful technology design.

{"title":"Social context of the issue of discriminatory algorithmic decision-making systems","authors":"Daniel Varona,&nbsp;Juan Luis Suarez","doi":"10.1007/s00146-023-01741-x","DOIUrl":"10.1007/s00146-023-01741-x","url":null,"abstract":"<div><p>Algorithmic decision-making systems have the potential to amplify existing discriminatory patterns and negatively affect perceptions of justice in society. There is a need for a revision of mechanisms to address discrimination in light of the unique challenges presented by these systems, which are not easily auditable or explainable. Research efforts to bring fairness to ADM solutions should be viewed as a matter of justice and trust among actors should be ensured through technology design. Ideas that move us to explore the notions of justice within the field of political thinking, aiming to identify the elements that frame the social context of the discriminatory decisions produced by algorithmic decision-making solutions. Our explorations suggest that the efforts to bring fairness to ADMs should be seen as a matter of justice and that trust in these systems can be ensured through careful technology design.</p></div>","PeriodicalId":47165,"journal":{"name":"AI & Society","volume":"39 6","pages":"2799 - 2811"},"PeriodicalIF":2.9,"publicationDate":"2023-09-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"136107595","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Generative AI and human labor: who is replaceable?
IF 2.9 Q2 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE Pub Date : 2023-09-08 DOI: 10.1007/s00146-023-01773-3
Syed AbuMusab
{"title":"Generative AI and human labor: who is replaceable?","authors":"Syed AbuMusab","doi":"10.1007/s00146-023-01773-3","DOIUrl":"10.1007/s00146-023-01773-3","url":null,"abstract":"","PeriodicalId":47165,"journal":{"name":"AI & Society","volume":"39 6","pages":"3051 - 3053"},"PeriodicalIF":2.9,"publicationDate":"2023-09-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133507784","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Machine and human agents in moral dilemmas: automation–autonomic and EEG effect
IF 2.9 Q2 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE Pub Date : 2023-09-06 DOI: 10.1007/s00146-023-01772-4
Federico Cassioli, Laura Angioletti, Michela Balconi

Automation is inherently tied to ethical challenges because of its potential involvement in morally loaded decisions. In the present research, participants (n = 34) took part in a moral multi-trial dilemma-based task in which the agent (human vs. machine) and the behavior (action vs. inaction) factors were randomized. Self-report measures of morality, consciousness, responsibility, intentionality, and emotional impact were gathered, together with electroencephalography (delta, theta, beta, upper and lower alpha, and gamma powers) and peripheral autonomic (electrodermal activity, heart rate variability) data. The data showed that moral schemata vary as a function of the decider involved, and when the agent and behavior factors are crossed. Subjects did not consider machines to be full moral deciders to the same degree as humans and tended to be more morally accepting of human action and machine inaction. Moreover, autonomic physiological activity might support the a posteriori moral evaluation. In the evaluation of the agent’s consciousness, a beta ventrolateral prefrontal synchronization was detected for human action and machine inaction, while a generalized gamma synchronization occurred in artificial-agent trials when participants rated the emotional impact of the decider’s behavior. The detected differences might point to a potential explicit and implicit asymmetry in moral reasoning toward artificial and human agents.
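The study reports spectral power in the delta, theta, lower and upper alpha, beta, and gamma ranges alongside peripheral autonomic measures. As a point of reference only, the sketch below shows one conventional way such band powers are extracted from a single EEG channel (Welch power spectral density plus band integration); the band edges, sampling rate, and function names are illustrative assumptions and do not reproduce the authors' processing pipeline.

# Minimal sketch (not the authors' pipeline): conventional EEG band-power
# extraction for one channel via Welch's PSD. Band edges are common
# conventions, assumed here purely for illustration.
import numpy as np
from scipy.signal import welch

BANDS = {
    "delta": (1, 4),
    "theta": (4, 8),
    "lower_alpha": (8, 10),
    "upper_alpha": (10, 13),
    "beta": (13, 30),
    "gamma": (30, 45),
}

def band_powers(eeg, fs):
    """Absolute power per band for a 1-D EEG signal sampled at fs Hz."""
    freqs, psd = welch(eeg, fs=fs, nperseg=int(2 * fs))  # 2-second windows
    df = freqs[1] - freqs[0]
    return {
        name: float(psd[(freqs >= lo) & (freqs < hi)].sum() * df)
        for name, (lo, hi) in BANDS.items()
    }

if __name__ == "__main__":
    fs = 256.0
    t = np.arange(0, 10, 1 / fs)
    # Synthetic test signal: alpha (10 Hz) and beta (22 Hz) components plus noise.
    sig = (np.sin(2 * np.pi * 10 * t)
           + 0.5 * np.sin(2 * np.pi * 22 * t)
           + 0.1 * np.random.randn(t.size))
    print(band_powers(sig, fs))

In practice, values like these would presumably be computed per trial and per electrode and then contrasted across the randomized agent and behavior conditions; the synchronization effects described in the abstract refer to such condition-wise comparisons.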

{"title":"Machine and human agents in moral dilemmas: automation–autonomic and EEG effect","authors":"Federico Cassioli,&nbsp;Laura Angioletti,&nbsp;Michela Balconi","doi":"10.1007/s00146-023-01772-4","DOIUrl":"10.1007/s00146-023-01772-4","url":null,"abstract":"<div><p>Automation is inherently tied to ethical challenges because of its potential involvement in morally loaded decisions. In the present research, participants (<i>n</i> = 34) took part in a moral multi-trial dilemma-based task where the agent (human vs. machine) and the behavior (action vs. inaction) factors were randomized. Self-report measures, in terms of morality, consciousness, responsibility, intentionality, and emotional impact evaluation were gathered, together with electroencephalography (delta, theta, beta, upper and lower alpha, and gamma powers) and peripheral autonomic (electrodermal activity, heart rate variability) data. Data showed that moral schemata vary as a function of the involved decider, and when the agent and behavior factors are crossed. Subjects did not consider machines full moral deciders to the same degree as humans and tend to morally better accept human active behavior and machine inaction. Moreover, the autonomic physiological activity might support the a-posteriori moral evaluation. In the evaluation of the agent’s consciousness, a beta ventrolateral prefrontal synchronization was detected for human action and machine inaction, while a generalized gamma synchronization occurred in artificial agent trials while rating the emotional impact of the decider’s behavior. The detected differences might point to a potential explicit and implicit asymmetry in moral reasoning toward artificial and human agents.</p></div>","PeriodicalId":47165,"journal":{"name":"AI & Society","volume":"39 6","pages":"2677 - 2689"},"PeriodicalIF":2.9,"publicationDate":"2023-09-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130093546","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
The latent space of data ethics
IF 2.9 Q2 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE Pub Date : 2023-09-04 DOI: 10.1007/s00146-023-01757-3
Enrico Panai

In informationally mature societies, almost all organisations record, generate, process, use, share and disseminate data. In particular, the rise of AI and autonomous systems has corresponded to an improvement in computational power and in solving complex problems. However, the resulting possibilities have been coupled with an upsurge of ethical risks. To avoid the misuse, underuse, and harmful use of data and data-based systems like AI, we should use an ethical framework appropriate to the object of its reasoning. Unfortunately, in recent years, the space for data-related ethics has not been precisely defined in organisations. As a consequence, there has been an overlapping of responsibilities and a void of clear accountabilities. Ethical issues have, therefore, been dealt with using inadequate levels of abstraction (e.g. legal, technical). Yet, if building an ethical infrastructure requires the collaboration of each body, addressing ethical issues related to data requires leaving room for the appropriate level of abstraction. This paper first aims to show how the space of data ethics is already latent in organisations. It then highlights how to redefine roles (chief data ethics officer, data ethics committee, etc.) and codes (code of data ethics) to create and maintain an environment where ethical reasoning about data, information, and AI systems may flourish.

{"title":"The latent space of data ethics","authors":"Enrico Panai","doi":"10.1007/s00146-023-01757-3","DOIUrl":"10.1007/s00146-023-01757-3","url":null,"abstract":"<div><p>In informationally mature societies, almost all organisations record, generate, process, use, share and disseminate data. In particular, the rise of AI and autonomous systems has corresponded to an improvement in computational power and in solving complex problems. However, the resulting possibilities have been coupled with an upsurge of ethical risks. To avoid the misuse, underuse, and harmful use of data and data-based systems like AI, we should use an ethical framework appropriate to the object of its reasoning. Unfortunately, in recent years, the space for data-related ethics has not been precisely defined in organisations. As a consequence, there has been an overlapping of responsibilities and a void of clear accountabilities. Ethical issues have, therefore, been dealt with using inadequate levels of abstraction (e.g. legal, technical). Yet, if building an ethical infrastructure requires the collaboration of each body, addressing ethical issues related to data requires leaving room for the appropriate level of abstraction. This paper first aims to show how the space of data ethics is already latent in organisations. It then highlights how to redefine roles (chief data ethics officer, data ethics committee, etc.) and codes (code of data ethics) to create and maintain an environment where ethical reasoning about data, information, and AI systems may flourish.</p></div>","PeriodicalId":47165,"journal":{"name":"AI & Society","volume":"39 6","pages":"2647 - 2665"},"PeriodicalIF":2.9,"publicationDate":"2023-09-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133752561","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Humanities and social sciences (HSS) and the challenges posed by AI: a French point of view
IF 2.9 Q2 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE Pub Date : 2023-09-01 DOI: 10.1007/s00146-023-01746-6
Laurent Petit

The humanities and social sciences (HSS) are being turned upside down by advances in artificial intelligence (AI), and their very existence could be threatened. These sciences are being profoundly destabilised by a dual process of naturalisation of social phenomena and fetishisation of numbers, accentuated by the development of AI (part 1). Both STM (science, technology, medicine) and HSS are facing major epistemological challenges, but for the latter they carry the risk of marginalisation (part 2). The humanities and social sciences remain the best equipped to question the social construct represented by the development of AI. However, this essential approach is not enough. We need to ask ourselves: how can the HSS reintroduce interpretation when they have less and less control over how data is put together? Only a balanced partnership between STM and HSS is likely to meet all these challenges (part 3). Using the case of education, which has long been at the forefront of developments in other sectors of social life, we would like to show how and on what priority issues such a partnership can be built (part 4).

{"title":"Humanities and social sciences (HSS) and the challenges posed by AI: a French point of view","authors":"Laurent Petit","doi":"10.1007/s00146-023-01746-6","DOIUrl":"10.1007/s00146-023-01746-6","url":null,"abstract":"<div><p>The humanities and social sciences (HSS) are being turned upside down by advances in artificial intelligence (AI), and their very existence could be threatened. These sciences are being profoundly destabilised by a dual process of naturalisation of social phenomena and fetishisation of numbers, accentuated by the development of AI (part 1). Both STM (science, technology, medicine) and HSS are facing major epistemological challenges, but for the latter they carry the risk of marginalisation (part 2). The humanities and social sciences remain the best equipped to question the social construct represented by the development of AI. However, this essential approach is not enough. We need to ask ourselves: how can the HSS reintroduce interpretation when they have less and less control over how data is put together? Only a balanced partnership between STM and HSS is likely to meet all these challenges (part 3). Using the case of education, which has long been at the forefront of developments in other sectors of social life, we would like to show how and on what priority issues such a partnership can be built (part 4).</p></div>","PeriodicalId":47165,"journal":{"name":"AI & Society","volume":"39 6","pages":"2791 - 2797"},"PeriodicalIF":2.9,"publicationDate":"2023-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127622013","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0