Calissa J Leslie-Miller, Stacey L Simon, Kelsey Dean, Nadine Mokhallati, Christopher C Cushing
The critical need for expert oversight of ChatGPT: Prompt engineering for safeguarding child healthcare information.
Journal of Pediatric Psychology (Q2, Psychology, Developmental; IF 2.7), published 2024-09-13
DOI: https://doi.org/10.1093/jpepsy/jsae075
Citations: 0
Abstract
OBJECTIVE
ChatGPT and other large language models have the potential to transform the health information landscape online. However, lack of domain-specific expertise and known errors in large language models raise concerns about the widespread adoption of content generated by these tools for parents making healthcare decisions for their children. The aim of this study is to determine if health-related text generated by ChatGPT under the supervision of an expert is comparable to that generated by an expert regarding persuasiveness and credibility from the perspective of a parent.
METHODS
In a cross-sectional study, 116 parents aged 18-65 years (M = 45.02, SD = 10.92) completed a baseline assessment of their behavioral intentions regarding pediatric healthcare topics. Subsequently, participants rated text generated either by an expert or by ChatGPT under the supervision of an expert.
RESULTS
Results indicate that prompt-engineered ChatGPT is capable of influencing behavioral intentions for medication, sleep, and diet decision-making. Additionally, there was little distinction between prompt-engineered ChatGPT and content experts on perceived morality, trustworthiness, expertise, accuracy, and reliance. Notably, when differences were present, prompt-engineered ChatGPT was rated as higher in trustworthiness and accuracy, and participants indicated they would be more likely to rely on the information presented by prompt-engineered ChatGPT than on that presented by the expert.
DISCUSSION
Given that parents will trust and rely on information generated by ChatGPT, it is critically important that human domain-specific expertise be applied to healthcare information before it is ultimately presented to consumers (e.g., parents).
About the journal:
The Journal of Pediatric Psychology is the official journal of the Society of Pediatric Psychology, Division 54 of the American Psychological Association. It publishes articles related to theory, research, and professional practice in pediatric psychology. Pediatric psychology is an integrated field of science and practice in which the principles of psychology are applied within the context of pediatric health. The field aims to promote the health and development of children, adolescents, and their families through the use of evidence-based methods.