On manipulation by emotional AI: UK adults’ views and governance implications

V. Bakir, Alexander Laffer, Andrew McStay, Diana Miranda, Lachlan Urquhart
{"title":"关于情感人工智能的操纵:英国成年人的观点和治理影响","authors":"V. Bakir, Alexander Laffer, Andrew McStay, Diana Miranda, Lachlan Urquhart","doi":"10.3389/fsoc.2024.1339834","DOIUrl":null,"url":null,"abstract":"With growing commercial, regulatory and scholarly interest in use of Artificial Intelligence (AI) to profile and interact with human emotion (“emotional AI”), attention is turning to its capacity for manipulating people, relating to factors impacting on a person’s decisions and behavior. Given prior social disquiet about AI and profiling technologies, surprisingly little is known on people’s views on the benefits and harms of emotional AI technologies, especially their capacity for manipulation. This matters because regulators of AI (such as in the European Union and the UK) wish to stimulate AI innovation, minimize harms and build public trust in these systems, but to do so they should understand the public’s expectations. Addressing this, we ascertain UK adults’ perspectives on the potential of emotional AI technologies for manipulating people through a two-stage study. Stage One (the qualitative phase) uses design fiction principles to generate adequate understanding and informed discussion in 10 focus groups with diverse participants (n = 46) on how emotional AI technologies may be used in a range of mundane, everyday settings. The focus groups primarily flagged concerns about manipulation in two settings: emotion profiling in social media (involving deepfakes, false information and conspiracy theories), and emotion profiling in child oriented “emotoys” (where the toy responds to the child’s facial and verbal expressions). In both these settings, participants express concerns that emotion profiling covertly exploits users’ cognitive or affective weaknesses and vulnerabilities; additionally, in the social media setting, participants express concerns that emotion profiling damages people’s capacity for rational thought and action. To explore these insights at a larger scale, Stage Two (the quantitative phase), conducts a UK-wide, demographically representative national survey (n = 2,068) on attitudes toward emotional AI. Taking care to avoid leading and dystopian framings of emotional AI, we find that large majorities express concern about the potential for being manipulated through social media and emotoys. In addition to signaling need for civic protections and practical means of ensuring trust in emerging technologies, the research also leads us to provide a policy-friendly subdivision of what is meant by manipulation through emotional AI and related technologies.","PeriodicalId":507974,"journal":{"name":"Frontiers in Sociology","volume":null,"pages":null},"PeriodicalIF":0.0000,"publicationDate":"2024-06-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"On manipulation by emotional AI: UK adults’ views and governance implications\",\"authors\":\"V. Bakir, Alexander Laffer, Andrew McStay, Diana Miranda, Lachlan Urquhart\",\"doi\":\"10.3389/fsoc.2024.1339834\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"With growing commercial, regulatory and scholarly interest in use of Artificial Intelligence (AI) to profile and interact with human emotion (“emotional AI”), attention is turning to its capacity for manipulating people, relating to factors impacting on a person’s decisions and behavior. 
Given prior social disquiet about AI and profiling technologies, surprisingly little is known on people’s views on the benefits and harms of emotional AI technologies, especially their capacity for manipulation. This matters because regulators of AI (such as in the European Union and the UK) wish to stimulate AI innovation, minimize harms and build public trust in these systems, but to do so they should understand the public’s expectations. Addressing this, we ascertain UK adults’ perspectives on the potential of emotional AI technologies for manipulating people through a two-stage study. Stage One (the qualitative phase) uses design fiction principles to generate adequate understanding and informed discussion in 10 focus groups with diverse participants (n = 46) on how emotional AI technologies may be used in a range of mundane, everyday settings. The focus groups primarily flagged concerns about manipulation in two settings: emotion profiling in social media (involving deepfakes, false information and conspiracy theories), and emotion profiling in child oriented “emotoys” (where the toy responds to the child’s facial and verbal expressions). In both these settings, participants express concerns that emotion profiling covertly exploits users’ cognitive or affective weaknesses and vulnerabilities; additionally, in the social media setting, participants express concerns that emotion profiling damages people’s capacity for rational thought and action. To explore these insights at a larger scale, Stage Two (the quantitative phase), conducts a UK-wide, demographically representative national survey (n = 2,068) on attitudes toward emotional AI. Taking care to avoid leading and dystopian framings of emotional AI, we find that large majorities express concern about the potential for being manipulated through social media and emotoys. In addition to signaling need for civic protections and practical means of ensuring trust in emerging technologies, the research also leads us to provide a policy-friendly subdivision of what is meant by manipulation through emotional AI and related technologies.\",\"PeriodicalId\":507974,\"journal\":{\"name\":\"Frontiers in Sociology\",\"volume\":null,\"pages\":null},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2024-06-07\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Frontiers in Sociology\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.3389/fsoc.2024.1339834\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Frontiers in Sociology","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.3389/fsoc.2024.1339834","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 0

Abstract

With growing commercial, regulatory and scholarly interest in the use of Artificial Intelligence (AI) to profile and interact with human emotion (“emotional AI”), attention is turning to its capacity for manipulating people, understood in terms of the factors that shape a person’s decisions and behavior. Given prior social disquiet about AI and profiling technologies, surprisingly little is known about people’s views on the benefits and harms of emotional AI technologies, especially their capacity for manipulation. This matters because regulators of AI (such as in the European Union and the UK) wish to stimulate AI innovation, minimize harms and build public trust in these systems, but to do so they should understand the public’s expectations. Addressing this, we ascertain UK adults’ perspectives on the potential of emotional AI technologies to manipulate people through a two-stage study. Stage One (the qualitative phase) uses design fiction principles to generate adequate understanding and informed discussion in 10 focus groups with diverse participants (n = 46) on how emotional AI technologies may be used in a range of mundane, everyday settings. The focus groups primarily flagged concerns about manipulation in two settings: emotion profiling in social media (involving deepfakes, false information and conspiracy theories), and emotion profiling in child-oriented “emotoys” (where the toy responds to the child’s facial and verbal expressions). In both settings, participants express concern that emotion profiling covertly exploits users’ cognitive or affective weaknesses and vulnerabilities; additionally, in the social media setting, participants express concern that emotion profiling damages people’s capacity for rational thought and action. To explore these insights at a larger scale, Stage Two (the quantitative phase) conducts a UK-wide, demographically representative national survey (n = 2,068) on attitudes toward emotional AI. Taking care to avoid leading and dystopian framings of emotional AI, we find that large majorities express concern about the potential for being manipulated through social media and emotoys. In addition to signaling the need for civic protections and practical means of ensuring trust in emerging technologies, the research also leads us to provide a policy-friendly subdivision of what is meant by manipulation through emotional AI and related technologies.