The Role of Humanization and Robustness of Large Language Models in Conversational Artificial Intelligence for Individuals With Depression: A Critical Analysis

IF 4.8 · CAS Tier 2 (Medicine) · Q1 PSYCHIATRY · JMIR Mental Health · Pub Date: 2024-07-02 · DOI: 10.2196/56569
Andrea Ferrario, Jana Sedlakova, Manuel Trachsel
Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11231450/pdf/
Citations: 0

Abstract


Large language model (LLM)-powered services are gaining popularity in various applications due to their exceptional performance in many tasks, such as sentiment analysis and answering questions. Recently, research has been exploring their potential use in digital health contexts, particularly in the mental health domain. However, implementing LLM-enhanced conversational artificial intelligence (CAI) presents significant ethical, technical, and clinical challenges. In this viewpoint paper, we discuss 2 challenges that affect the use of LLM-enhanced CAI for individuals with mental health issues, focusing on the use case of patients with depression: the tendency to humanize LLM-enhanced CAI and their lack of contextualized robustness. Our approach is interdisciplinary, relying on considerations from philosophy, psychology, and computer science. We argue that the humanization of LLM-enhanced CAI hinges on the reflection of what it means to simulate "human-like" features with LLMs and what role these systems should play in interactions with humans. Further, ensuring the contextualization of the robustness of LLMs requires considering the specificities of language production in individuals with depression, as well as its evolution over time. Finally, we provide a series of recommendations to foster the responsible design and deployment of LLM-enhanced CAI for the therapeutic support of individuals with depression.

Source journal: JMIR Mental Health (Medicine — Psychiatry and Mental Health)
CiteScore: 10.80 · Self-citation rate: 3.80% · Articles per year: 104 · Review time: 16 weeks
About the journal: JMIR Mental Health (JMH, ISSN 2368-7959) is a PubMed-indexed, peer-reviewed sister journal of JMIR, the leading eHealth journal (Impact Factor 2016: 5.175). JMIR Mental Health focusses on digital health and Internet interventions, technologies and electronic innovations (software and hardware) for mental health, addictions, online counselling and behaviour change. This includes formative evaluation and system descriptions, theoretical papers, review papers, viewpoint/vision papers, and rigorous evaluations.