Ethical Issues in Near-Future Socially Supportive Smart Assistants for Older Adults

Alex John London, Yosef S. Razin, Jason Borenstein, Motahhare Eslami, Russell Perkins, Paul Robinette
IEEE Transactions on Technology and Society, vol. 4, no. 4, pp. 291-301
DOI: 10.1109/TTS.2023.3237124
Published: 2023-01-16
URL: https://ieeexplore.ieee.org/document/10017383/
Citations: 0

Abstract

This paper considers novel ethical issues pertaining to near-future artificial intelligence (AI) systems that seek to support, maintain, or enhance the capabilities of older adults as they age and experience cognitive decline. In particular, we focus on smart assistants (SAs) that would seek to provide proactive assistance and mediate social interactions between users and other members of their social or support networks. Such systems would potentially have significant utility for users and their caregivers if they could reduce the cognitive load for tasks that help older adults maintain their autonomy and independence. However, proactively supporting even simple tasks, such as providing the user with a summary of a meeting or a conversation, would require a future SA to engage with ethical aspects of human interactions which computational systems currently have difficulty identifying, tracking, and navigating. If SAs fail to perceive ethically relevant aspects of social interactions, the resulting deficit in moral discernment would threaten important aspects of user autonomy and well-being. After describing the dynamic that generates these ethical challenges, we note how simple strategies for prompting user oversight of such systems might also undermine their utility. We conclude by considering how near-future SAs could exacerbate current worries about privacy, commodification of users, trust calibration and injustice.