Title: Revealing the source: How awareness alters perceptions of AI and human-generated mental health responses
Authors: Gagan Jain, Samridhi Pareek, Per Carlbring
Journal: Internet Interventions - The Application of Information Technology in Mental and Behavioural Health (JCR Q1, Health Care Sciences & Services; impact factor 3.6)
Publication date: 2024-04-27
DOI: 10.1016/j.invent.2024.100745
Open-access PDF: https://www.sciencedirect.com/science/article/pii/S2214782924000381/pdfft?md5=b59efae2eec3d7973679ab7b97a46467&pid=1-s2.0-S2214782924000381-main.pdf
Citations: 0
Abstract
In mental health care, the integration of artificial intelligence (AI) into internet interventions could significantly improve scalability and accessibility, provided that AI is perceived as being as effective as human professionals. This longitudinal study investigates comparative perceptions of ChatGPT and human mental health support professionals across three dimensions: authenticity, professionalism, and practicality. Initially, 140 participants evaluated responses from both sources without knowing their origin, revealing that AI-generated responses were rated significantly higher across all dimensions. Six months later, the same cohort (n = 111) reassessed these messages with the source of each response disclosed, aiming to understand the impact of source transparency on perceptions of and trust towards AI. The results indicate a shift in perception towards human responses only in terms of authenticity (Cohen's d = 0.45), and reveal a significant correlation between trust in AI and its practicality rating (r = 0.25), but not with authenticity or professionalism. A comparative analysis between blind and informed evaluations revealed a significant shift in favour of human response ratings (Cohen's d = 0.42–0.57), while AI response ratings showed minimal variation. These findings highlight the nuanced acceptance and role of AI in mental health support, emphasizing that disclosure of the response source significantly shapes perceptions of and trust in AI-generated assistance.
About the journal:
Official Journal of the European Society for Research on Internet Interventions (ESRII) and the International Society for Research on Internet Interventions (ISRII).
The aim of Internet Interventions is to publish scientific, peer-reviewed, high-impact research on Internet interventions and related areas.
Internet Interventions welcomes papers on the following subjects:
• Intervention studies targeting the promotion of mental health and featuring the Internet and/or technologies that use the Internet as an underlying platform, e.g. computers, smartphones, tablets, sensors
• Implementation and dissemination of Internet interventions
• Integration of Internet interventions into existing systems of care
• Descriptions of development and deployment infrastructures
• Internet intervention methodology and theory papers
• Internet-based epidemiology
• Descriptions of new Internet-based technologies and experiments with clinical applications
• Economics of internet interventions (cost-effectiveness)
• Health care policy and Internet interventions
• The role of culture in Internet interventions
• Internet psychometrics
• Ethical issues pertaining to Internet interventions and measurements
• Human-computer interaction and usability research with clinical implications
• Systematic reviews and meta-analysis on Internet interventions