‘Hypernudging’: a threat to moral autonomy?

Isabel Richards
*AI and Ethics*, vol. 5, no. 2, pp. 1121–1131. Published 28 March 2024. DOI: 10.1007/s43681-024-00449-y

PDF: https://link.springer.com/content/pdf/10.1007/s43681-024-00449-y.pdf

Abstract

It is well-recognised that cognitive irrationalities can be exploited to influence behaviour. ‘Hypernudging’ was coined by Karen Yeung to describe a powerful version of this phenomenon seen in digital systems that use large quantities of user data and machine learning to guide decision-making in highly personalised ways. Authors have worried about the societal impacts of the use of these capabilities at scale in commercial systems, but have only begun to articulate them concretely. In this paper I aim to elucidate one concern of this sort by focusing specifically on the employment of these techniques within social media and considering how it threatens our autonomy in forming moral judgments. By moral judgments I mean our judgments of someone’s actions or character as good versus bad. A threat to our autonomy in forming these is of real concern because moral judgments and their associated beliefs provide a critical backdrop for what is deemed acceptable in society, both individually and collectively, and therefore what futures are possible and probable.

In the first two sections I introduce a psychological model that describes how humans reach moral judgments and the conditions under which it can and cannot be considered autonomous. In the third section I describe how hypernudging within a social media context creates the relevant problematic conditions so as to constitute a threat to our autonomy in forming moral judgments. In the fourth section I explore some practical measures that could be taken to protect moral autonomy. I conclude with some indicative evidence that this threat is not experienced uniformly across all societies, pointing to interesting future areas of research.
