
AI & Society: latest publications

Global governance and the normalization of artificial intelligence as ‘good’ for human health
Q2 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE Pub Date : 2023-09-13 DOI: 10.1007/s00146-023-01774-2
Michael Strange, Jason Tucker
Abstract: The term ‘artificial intelligence’ has arguably come to function in political discourse as what Laclau called an ‘empty signifier’. This article traces the shifting political discourse on AI within three key institutions of global governance (OHCHR, WHO, and UNESCO) and, in so doing, highlights the role of ‘crisis’ moments in justifying a series of pivotal re-articulations. Most important has been the attachment of AI to the narrative around digital automation in human healthcare. Greatly enabled by the societal context of the pandemic, all three institutions have moved from being critical of the unequal power relations in the economy of AI to, today, reframing themselves primarily as facilitators tasked with helping to ensure the application of AI technologies. The analysis identifies a shift in which human health and healthcare are framed as being in a ‘crisis’ to which AI technology is presented as the remedy. The article argues the need to trace these discursive shifts as a means by which to understand, monitor, and, where necessary, hold to account these changes in the governance of AI in society.
Citations: 1
Sculpting the social algorithm for radical futurity
Q2 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE Pub Date : 2023-09-13 DOI: 10.1007/s00146-023-01760-8
Anisa Matthews
{"title":"Sculpting the social algorithm for radical futurity","authors":"Anisa Matthews","doi":"10.1007/s00146-023-01760-8","DOIUrl":"https://doi.org/10.1007/s00146-023-01760-8","url":null,"abstract":"","PeriodicalId":47165,"journal":{"name":"AI & Society","volume":"57 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-09-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"135735182","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Digital sovereignty, digital infrastructures, and quantum horizons
Q2 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE Pub Date : 2023-09-13 DOI: 10.1007/s00146-023-01729-7
Geoff Gordon
Abstract: This article holds that governmental investments in quantum technologies speak to the imaginable futures of digital sovereignty and digital infrastructures, two major areas of change driven by related technologies like AI and Big Data, among other things, in international law today. Under intense development today for future interpolation into digital systems that they may alter, quantum technologies occupy a sort of liminal position, rooted in existing assemblages of computational technologies while pointing to new horizons for them. The possibilities they raise are neither certain nor determinate, but active investments in them (legal, political and material investments) offer perspective on digital technology-driven influences on an international legal imagination. In contributing to visions of the future that are guiding ambitions for digital sovereignty and digital infrastructures, quantum technologies condition digital technology-driven changes to international law and legal imagination in the present. Privileging observation and description, I adapt and utilize a diffractive method with the aim of discerning what emerges out of the interference among the several related things assembled for this article, including material technologies and legal institutions. In conclusion, I observe ambivalent changes to an international legal imagination, changes which promise transformation but appear nonetheless to reproduce current distributions of power and resources.
Citations: 1
Reimagining Benin Bronzes using generative adversarial networks
Q2 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE Pub Date : 2023-09-12 DOI: 10.1007/s00146-023-01761-7
Minne Atairu
{"title":"Reimagining Benin Bronzes using generative adversarial networks","authors":"Minne Atairu","doi":"10.1007/s00146-023-01761-7","DOIUrl":"https://doi.org/10.1007/s00146-023-01761-7","url":null,"abstract":"","PeriodicalId":47165,"journal":{"name":"AI & Society","volume":"27 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-09-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"135826903","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Social context of the issue of discriminatory algorithmic decision-making systems
Q2 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE Pub Date : 2023-09-09 DOI: 10.1007/s00146-023-01741-x
Daniel Varona, Juan Luis Suarez
{"title":"Social context of the issue of discriminatory algorithmic decision-making systems","authors":"Daniel Varona, Juan Luis Suarez","doi":"10.1007/s00146-023-01741-x","DOIUrl":"https://doi.org/10.1007/s00146-023-01741-x","url":null,"abstract":"","PeriodicalId":47165,"journal":{"name":"AI & Society","volume":"28 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-09-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"136107595","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Truth machines: synthesizing veracity in AI language models
Q2 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE Pub Date : 2023-08-28 DOI: 10.1007/s00146-023-01756-4
Luke Munn, Liam Magee, Vanicka Arora
Abstract: As AI technologies are rolled out into healthcare, academia, human resources, law, and a multitude of other domains, they become de facto arbiters of truth. But truth is highly contested, with many different definitions and approaches. This article discusses the struggle for truth in AI systems and the general responses to date. It then investigates the production of truth in InstructGPT, a large language model, highlighting how data harvesting, model architectures, and social feedback mechanisms weave together disparate understandings of veracity. It conceptualizes this performance as an operationalization of truth, where distinct, often-conflicting claims are smoothly synthesized and confidently presented as truth-statements. We argue that these same logics and inconsistencies play out in Instruct’s successor, ChatGPT, reiterating truth as a non-trivial problem. We suggest that enriching sociality and thickening “reality” are two promising vectors for enhancing the truth-evaluating capacities of future language models. We conclude, however, by stepping back to consider AI truth-telling as a social practice: what kind of “truth” do we as listeners desire?
Citations: 2
From the eco-calypse to the infocalypse: the importance of building a new culture for protecting the infosphere
IF 2.9 Q2 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE Pub Date : 2023-08-26 DOI: 10.1007/s00146-023-01737-7
Manh-Tung Ho, Hong-Kong To Nguyen

In our ever more technologically driven and mediatized society, we face the existential risk of falling into an info-calypse as much as an eco-calypse. To complement the list of values of a progressive culture put forth by Harrison (Natl Interest 60:55–65, 2000) and Vuong (Econ Bus Lett 10(3):284–290, 2021), this short essay proposes cultivating a new cultural value of protecting the infosphere. It argues that rewarding practices and products that strengthen the integrity of the infosphere, as part of newly emerged corporate social responsibility (CSR) practices, is highly beneficial in the fight against contamination of the infosphere, i.e., misinformation, disinformation, damaging content, etc.

Citations: 0
The privacy dependency thesis and self-defense
IF 2.9 Q2 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE Pub Date : 2023-08-19 DOI: 10.1007/s00146-023-01734-w
Lauritz Aastrup Munch, Jakob Thrane Mainz

If I decide to disclose information about myself, this act may undermine other people’s ability to conceal information about them. Such dependencies are called privacy dependencies in the literature. Some say that privacy dependencies generate moral duties to avoid sharing information about oneself. If true, we argue, then it is sometimes justified for others to impose harm on the person sharing information to prevent them from doing so. In this paper, we first show how such conclusions arise. Next, we show that the existence of such a dependency between the moral significance you are inclined to attribute to privacy dependencies and judgments about permissible self-defense puts pressure on at least some ways of spelling out the idea that privacy dependencies ought to constrain our data-sharing conduct.

Citations: 0
Coverage of well-being within artificial intelligence, machine learning and robotics academic literature: the case of disabled people
IF 2.9 Q2 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE Pub Date : 2023-08-16 DOI: 10.1007/s00146-023-01735-9
Aspen Lillywhite, Gregor Wolbring

Well-being is an important policy concept, including in discussions around the use of artificial intelligence, machine learning and robotics. Disabled people experience challenges in their well-being. Therefore, the aim of our scoping review study of academic abstracts, employing Scopus, IEEE Xplore, Compendex and the 70 databases from EBSCO-HOST as sources, was to better understand how academic literature focusing on AI/ML/robotics engages with well-being in relation to disabled people. Our objective was to answer the following research question: how and to what extent does the AI/ML/robotics literature we covered include well-being in relation to disabled people? We found 2071 academic abstracts covering AI/ML and well-being, and 1055 covering robotics and well-being. Within these abstracts, only 39 of the AI/ML abstracts and 48 of the robotics abstracts covered well-being in relation to disabled people. The tone of the coverage was techno-positive and techno-optimistic, arguing that AI/ML/robotics could improve the well-being of disabled people in general or improve well-being by helping disabled people overcome their ‘disability’ or make tasks easier. No negative effects that AI/ML/robotics could have or have had on the well-being of disabled people were mentioned. Disabled people were portrayed only in patient, client, or user roles, but not as stakeholders in discussions of the governance of AI/ML/robotics. This biased and limited coverage of the impact of AI/ML/robotics on the well-being of disabled people disempowers disabled people.

Citations: 0
Dancing with robots: acceptability of humanoid companions to reduce loneliness during COVID-19 (and beyond)
IF 2.9 Q2 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE Pub Date : 2023-08-15 DOI: 10.1007/s00146-023-01738-6
Guy Moshe Ross

The purpose of this research is to explore the acceptance of social robots as companions. Understanding what affects the acceptance of humanoid companions may give society tools that will help people overcome loneliness throughout pandemics, such as COVID-19 and beyond. Based on regulatory focus theory, it is proposed that there is a relationship between goal-directed motivation and acceptance of robots as companions. The theory of regulatory focus posits that goal-directed behavior is regulated by two motivational systems—promotion and prevention. People with a promotion focus are concerned about accomplishments, are sensitive to the presence and absence of positive outcomes (gains/non-gains), and have a strategic preference for eager means of goal-pursuit. People with a prevention focus are concerned about security and safety, are sensitive to the absence and presence of negative outcomes (non-losses/losses), and have a strategic preference for vigilant means. Two studies support the notion of a relationship between acceptance of robots as companions and regulatory focus. In Study 1, chronic promotion focus was associated with acceptance of robots, and this association was mediated by loneliness. The weaker the promotion focus, the stronger was the sense of loneliness, and thus the higher was the acceptance of the robots. In Study 2, a situationally induced regulatory focus moderated the association between acceptance of robots and COVID-19 perceived severity. The higher the perceived severity of the disease, the higher was the willingness to accept the robots, and the effect was stronger for an induced prevention (vs. promotion) focus. Models of acceptance of robots are presented. Implications for well-being are discussed.

Citations: 0