Chatbots and mental health: Insights into the safety of generative AI

Journal of Consumer Psychology | IF: 4.0 | CAS Zone 2 (Management) | JCR Q2 (Business) | Pub Date: 2023-10-26 | DOI: 10.1002/jcpy.1393
Julian De Freitas, Ahmet Kaan Uğuralp, Zeliha Oğuz-Uğuralp, Stefano Puntoni
{"title":"聊天机器人与心理健康:洞察生成式人工智能的安全性","authors":"Julian De Freitas,&nbsp;Ahmet Kaan Uğuralp,&nbsp;Zeliha Oğuz-Uğuralp,&nbsp;Stefano Puntoni","doi":"10.1002/jcpy.1393","DOIUrl":null,"url":null,"abstract":"<p>Chatbots are now able to engage in sophisticated conversations with consumers. Due to the “black box” nature of the algorithms, it is impossible to predict in advance how these conversations will unfold. Behavioral research provides little insight into potential safety issues emerging from the current rapid deployment of this technology at scale. We begin to address this urgent question by focusing on the context of mental health and “companion AI”: Applications designed to provide consumers with synthetic interaction partners. Studies 1a and 1b present field evidence: Actual consumer interactions with two different companion AIs. Study 2 reports an extensive performance test of several commercially available companion AIs. Study 3 is an experiment testing consumer reaction to risky and unhelpful chatbot responses. The findings show that (1) mental health crises are apparent in a nonnegligible minority of conversations with users; (2) companion AIs are often unable to recognize, and respond appropriately to, signs of distress; and (3) consumers display negative reactions to unhelpful and risky chatbot responses, highlighting emerging reputational risks for generative AI companies.</p>","PeriodicalId":48365,"journal":{"name":"Journal of Consumer Psychology","volume":null,"pages":null},"PeriodicalIF":4.0000,"publicationDate":"2023-10-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Chatbots and mental health: Insights into the safety of generative AI\",\"authors\":\"Julian De Freitas,&nbsp;Ahmet Kaan Uğuralp,&nbsp;Zeliha Oğuz-Uğuralp,&nbsp;Stefano Puntoni\",\"doi\":\"10.1002/jcpy.1393\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<p>Chatbots are now able to engage in sophisticated conversations with consumers. Due to the “black box” nature of the algorithms, it is impossible to predict in advance how these conversations will unfold. Behavioral research provides little insight into potential safety issues emerging from the current rapid deployment of this technology at scale. We begin to address this urgent question by focusing on the context of mental health and “companion AI”: Applications designed to provide consumers with synthetic interaction partners. Studies 1a and 1b present field evidence: Actual consumer interactions with two different companion AIs. Study 2 reports an extensive performance test of several commercially available companion AIs. Study 3 is an experiment testing consumer reaction to risky and unhelpful chatbot responses. 
The findings show that (1) mental health crises are apparent in a nonnegligible minority of conversations with users; (2) companion AIs are often unable to recognize, and respond appropriately to, signs of distress; and (3) consumers display negative reactions to unhelpful and risky chatbot responses, highlighting emerging reputational risks for generative AI companies.</p>\",\"PeriodicalId\":48365,\"journal\":{\"name\":\"Journal of Consumer Psychology\",\"volume\":null,\"pages\":null},\"PeriodicalIF\":4.0000,\"publicationDate\":\"2023-10-26\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Journal of Consumer Psychology\",\"FirstCategoryId\":\"102\",\"ListUrlMain\":\"https://onlinelibrary.wiley.com/doi/10.1002/jcpy.1393\",\"RegionNum\":2,\"RegionCategory\":\"管理学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q2\",\"JCRName\":\"BUSINESS\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Journal of Consumer Psychology","FirstCategoryId":"102","ListUrlMain":"https://onlinelibrary.wiley.com/doi/10.1002/jcpy.1393","RegionNum":2,"RegionCategory":"管理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q2","JCRName":"BUSINESS","Score":null,"Total":0}
Citations: 0

Abstract


Chatbots are now able to engage in sophisticated conversations with consumers. Due to the “black box” nature of the algorithms, it is impossible to predict in advance how these conversations will unfold. Behavioral research provides little insight into potential safety issues emerging from the current rapid deployment of this technology at scale. We begin to address this urgent question by focusing on the context of mental health and “companion AI”: Applications designed to provide consumers with synthetic interaction partners. Studies 1a and 1b present field evidence: Actual consumer interactions with two different companion AIs. Study 2 reports an extensive performance test of several commercially available companion AIs. Study 3 is an experiment testing consumer reaction to risky and unhelpful chatbot responses. The findings show that (1) mental health crises are apparent in a nonnegligible minority of conversations with users; (2) companion AIs are often unable to recognize, and respond appropriately to, signs of distress; and (3) consumers display negative reactions to unhelpful and risky chatbot responses, highlighting emerging reputational risks for generative AI companies.

Source journal metrics: CiteScore 8.40 | Self-citation rate 14.60% | Articles published per year: 51
About the journal: The Journal of Consumer Psychology is devoted to psychological perspectives on the study of the consumer. It publishes articles that contribute both theoretically and empirically to an understanding of the psychological processes underlying consumers' thoughts, feelings, decisions, and behaviors. Areas of emphasis include, but are not limited to, consumer judgment and decision processes, attitude formation and change, reactions to persuasive communications, affective experiences, consumer information processing, consumer-brand relationships, affective, cognitive, and motivational determinants of consumer behavior, family and group decision processes, and cultural and individual differences in consumer behavior.