The critical need for expert oversight of ChatGPT: Prompt engineering for safeguarding child healthcare information.

Journal of Pediatric Psychology | IF 2.7 | JCR Q2, PSYCHOLOGY, DEVELOPMENTAL | CAS Tier 3 (Psychology) | Pub Date: 2024-09-13 | DOI: 10.1093/jpepsy/jsae075
Calissa J Leslie-Miller,Stacey L Simon,Kelsey Dean,Nadine Mokhallati,Christopher C Cushing
Citations: 0

Abstract

OBJECTIVE: ChatGPT and other large language models (LLMs) have the potential to transform the online health information landscape. However, LLMs' lack of domain-specific expertise and their known errors raise concerns about widespread adoption of content generated by these tools by parents making healthcare decisions for their children. The aim of this study was to determine whether health-related text generated by ChatGPT under the supervision of an expert is comparable to expert-generated text in persuasiveness and credibility from a parent's perspective.

METHODS: In a cross-sectional study, 116 parents aged 18-65 years (M = 45.02, SD = 10.92) completed a baseline assessment of their behavioral intentions regarding pediatric healthcare topics. Participants then rated text generated either by an expert or by ChatGPT under expert supervision.

RESULTS: Results indicate that prompt-engineered ChatGPT is capable of influencing behavioral intentions for medication, sleep, and diet decision-making. Additionally, there was little distinction between prompt-engineered ChatGPT and content experts on perceived morality, trustworthiness, expertise, accuracy, and reliance. Notably, where differences were present, prompt-engineered ChatGPT was rated higher in trustworthiness and accuracy, and participants indicated they would be more likely to rely on the information it presented than on the expert's.

DISCUSSION: Given that parents will trust and rely on information generated by ChatGPT, it is critically important that human domain-specific expertise be applied to healthcare information that will ultimately be presented to consumers (e.g., parents).
Source journal: Journal of Pediatric Psychology (PSYCHOLOGY, DEVELOPMENTAL)
CiteScore: 6.00
Self-citation rate: 11.10%
Articles per year: 89
Journal description: The Journal of Pediatric Psychology is the official journal of the Society of Pediatric Psychology, Division 54 of the American Psychological Association. The Journal of Pediatric Psychology publishes articles related to theory, research, and professional practice in pediatric psychology. Pediatric psychology is an integrated field of science and practice in which the principles of psychology are applied within the context of pediatric health. The field aims to promote the health and development of children, adolescents, and their families through use of evidence-based methods.