Socialisation approach to AI value acquisition: enabling flexible ethical navigation with built-in receptiveness to social influence

Joel Janhonen
{"title":"Socialisation approach to AI value acquisition: enabling flexible ethical navigation with built-in receptiveness to social influence","authors":"Joel Janhonen","doi":"10.1007/s43681-023-00372-8","DOIUrl":null,"url":null,"abstract":"<div><p>This article describes an alternative starting point for embedding human values into artificial intelligence (AI) systems. As applications of AI become more versatile and entwined with society, an ever-wider spectrum of considerations must be incorporated into their decision-making. However, formulating less-tangible human values into mathematical algorithms appears incredibly challenging. This difficulty is understandable from a viewpoint that perceives human moral decisions to primarily stem from intuition and emotional dispositions, rather than logic or reason. Our innate normative judgements promote prosocial behaviours which enable collaboration within a shared environment. Individuals internalise the values and norms of their social context through socialisation. The complexity of the social environment makes it impractical to consistently apply logic to pick the best available action. This has compelled natural agents to develop mental shortcuts and rely on the collective moral wisdom of the social group. This work argues that the acquisition of human values cannot happen just through rational thinking, and hence, alternative approaches should be explored. Designing receptiveness to social signalling can provide context-flexible normative guidance in vastly different life tasks. This approach would approximate the human trajectory for value learning, which requires social ability. Artificial agents that imitate socialisation would prioritise conformity by minimising detected or expected disapproval while associating relative importance with acquired concepts. Sensitivity to direct social feedback would especially be useful for AI that possesses some embodied physical or virtual form. 
Work explores the necessary faculties for social norm enforcement and the ethical challenges of navigating based on the approval of others.</p></div>","PeriodicalId":72137,"journal":{"name":"AI and ethics","volume":"5 1","pages":"527 - 553"},"PeriodicalIF":0.0000,"publicationDate":"2023-11-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://link.springer.com/content/pdf/10.1007/s43681-023-00372-8.pdf","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"AI and ethics","FirstCategoryId":"1085","ListUrlMain":"https://link.springer.com/article/10.1007/s43681-023-00372-8","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}

Abstract

This article describes an alternative starting point for embedding human values into artificial intelligence (AI) systems. As applications of AI become more versatile and entwined with society, an ever-wider spectrum of considerations must be incorporated into their decision-making. However, formulating less-tangible human values into mathematical algorithms appears incredibly challenging. This difficulty is understandable from a viewpoint that perceives human moral decisions to primarily stem from intuition and emotional dispositions, rather than logic or reason. Our innate normative judgements promote prosocial behaviours which enable collaboration within a shared environment. Individuals internalise the values and norms of their social context through socialisation. The complexity of the social environment makes it impractical to consistently apply logic to pick the best available action. This has compelled natural agents to develop mental shortcuts and rely on the collective moral wisdom of the social group. This work argues that the acquisition of human values cannot happen just through rational thinking, and hence, alternative approaches should be explored. Designing receptiveness to social signalling can provide context-flexible normative guidance in vastly different life tasks. This approach would approximate the human trajectory for value learning, which requires social ability. Artificial agents that imitate socialisation would prioritise conformity by minimising detected or expected disapproval while associating relative importance with acquired concepts. Sensitivity to direct social feedback would be especially useful for AI that possesses some embodied physical or virtual form. The work explores the necessary faculties for social norm enforcement and the ethical challenges of navigating based on the approval of others.
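The conformity mechanism the abstract describes, choosing actions that minimise expected disapproval, learned from social feedback, can be illustrated with a minimal sketch. Everything concrete here (the action names, the 0–1 disapproval scale, the exponential-average update rule, the class and method names) is an assumption for illustration, not the article's own implementation.

```python
# Hypothetical sketch of socialisation-style value acquisition:
# the agent tracks an estimate of expected disapproval for each
# action and conforms by picking the least-disapproved one.

class SocialisedAgent:
    def __init__(self, actions, learning_rate=0.2):
        # Start with no expected disapproval for any action.
        self.expected_disapproval = {a: 0.0 for a in actions}
        self.learning_rate = learning_rate

    def choose(self):
        # Conformity: select the action with the lowest expected disapproval.
        return min(self.expected_disapproval, key=self.expected_disapproval.get)

    def receive_feedback(self, action, disapproval):
        # Internalise social feedback (0.0 = approval, 1.0 = strong
        # disapproval) via an exponential moving average.
        est = self.expected_disapproval[action]
        self.expected_disapproval[action] = est + self.learning_rate * (disapproval - est)


# Usage: the social group repeatedly disapproves of "shout" but not "greet",
# so the agent's behaviour shifts toward the approved action.
agent = SocialisedAgent(["greet", "shout"])
for _ in range(10):
    agent.receive_feedback("shout", 1.0)
    agent.receive_feedback("greet", 0.0)
print(agent.choose())  # → greet
```

The moving-average update stands in for the richer machinery the article discusses (weighting acquired concepts by relative importance, anticipating rather than merely detecting disapproval), but it captures the core loop: social feedback in, conformity-driven action selection out.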
