{"title":"用于家长幼儿龋齿教育的 ChatGPT:是敌是友?","authors":"Rawan Elkarmi, Suha Abu-Ghazaleh, Hawazen Sonbol, Ola Haha, Alaa Al-Haddad, Yazan Hassona","doi":"10.1111/ipd.13283","DOIUrl":null,"url":null,"abstract":"<p><strong>Background: </strong>With the increasing popularity of online sources for health information, parents may seek information related to early childhood caries (ECC) from artificial intelligence-based chatbots.</p><p><strong>Aim: </strong>The aim of this article was to evaluate the usefulness, quality, reliability, and readability of ChatGPT answers to parents' questions about ECC.</p><p><strong>Design: </strong>Eighty questions commonly asked about ECC were compiled from experts and keyword research tools. ChatGPT 3.5 was asked these questions independently. The answers were evaluated by experts in paediatric dentistry.</p><p><strong>Results: </strong>ChatGPT provided \"very useful\" and \"useful\" responses to 82.5% of the questions. The mean global quality score was 4.3 ± 1 (good quality). The mean reliability score was 18.5 ± 8.9 (average to very good). The mean understandability score was 59.5% ± 13.8 (not highly understandable), and the mean actionability score was 40.5% ± 12.8 (low actionability). The mean Flesch-Kincaid reading ease score was 32% ± 25.7, and the mean Simple Measure of Gobbledygook index readability score was 15.3 ± 9.1(indicating poor readability for the lay person). Misleading and false information were detected in some answers.</p><p><strong>Conclusion: </strong>ChatGPT has significant potential as a tool for answering parent's questions about ECC. Concerns, however, do exist about the readability and actionability of the answers. 
The presence of false information should not be overlooked.</p>","PeriodicalId":14268,"journal":{"name":"International journal of paediatric dentistry","volume":" ","pages":""},"PeriodicalIF":2.3000,"publicationDate":"2024-11-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"ChatGPT for parents' education about early childhood caries: A friend or foe?\",\"authors\":\"Rawan Elkarmi, Suha Abu-Ghazaleh, Hawazen Sonbol, Ola Haha, Alaa Al-Haddad, Yazan Hassona\",\"doi\":\"10.1111/ipd.13283\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<p><strong>Background: </strong>With the increasing popularity of online sources for health information, parents may seek information related to early childhood caries (ECC) from artificial intelligence-based chatbots.</p><p><strong>Aim: </strong>The aim of this article was to evaluate the usefulness, quality, reliability, and readability of ChatGPT answers to parents' questions about ECC.</p><p><strong>Design: </strong>Eighty questions commonly asked about ECC were compiled from experts and keyword research tools. ChatGPT 3.5 was asked these questions independently. The answers were evaluated by experts in paediatric dentistry.</p><p><strong>Results: </strong>ChatGPT provided \\\"very useful\\\" and \\\"useful\\\" responses to 82.5% of the questions. The mean global quality score was 4.3 ± 1 (good quality). The mean reliability score was 18.5 ± 8.9 (average to very good). The mean understandability score was 59.5% ± 13.8 (not highly understandable), and the mean actionability score was 40.5% ± 12.8 (low actionability). The mean Flesch-Kincaid reading ease score was 32% ± 25.7, and the mean Simple Measure of Gobbledygook index readability score was 15.3 ± 9.1(indicating poor readability for the lay person). 
Misleading and false information were detected in some answers.</p><p><strong>Conclusion: </strong>ChatGPT has significant potential as a tool for answering parent's questions about ECC. Concerns, however, do exist about the readability and actionability of the answers. The presence of false information should not be overlooked.</p>\",\"PeriodicalId\":14268,\"journal\":{\"name\":\"International journal of paediatric dentistry\",\"volume\":\" \",\"pages\":\"\"},\"PeriodicalIF\":2.3000,\"publicationDate\":\"2024-11-12\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"International journal of paediatric dentistry\",\"FirstCategoryId\":\"3\",\"ListUrlMain\":\"https://doi.org/10.1111/ipd.13283\",\"RegionNum\":3,\"RegionCategory\":\"医学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q2\",\"JCRName\":\"DENTISTRY, ORAL SURGERY & MEDICINE\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"International journal of paediatric dentistry","FirstCategoryId":"3","ListUrlMain":"https://doi.org/10.1111/ipd.13283","RegionNum":3,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q2","JCRName":"DENTISTRY, ORAL SURGERY & MEDICINE","Score":null,"Total":0}
ChatGPT for parents' education about early childhood caries: A friend or foe?
Background: With the increasing popularity of online sources for health information, parents may seek information related to early childhood caries (ECC) from artificial intelligence-based chatbots.
Aim: The aim of this article was to evaluate the usefulness, quality, reliability, and readability of ChatGPT answers to parents' questions about ECC.
Design: Eighty commonly asked questions about ECC were compiled from experts and keyword research tools. Each question was posed to ChatGPT 3.5 independently, and the answers were evaluated by experts in paediatric dentistry.
Results: ChatGPT provided "very useful" and "useful" responses to 82.5% of the questions. The mean global quality score was 4.3 ± 1 (good quality). The mean reliability score was 18.5 ± 8.9 (average to very good). The mean understandability score was 59.5% ± 13.8 (not highly understandable), and the mean actionability score was 40.5% ± 12.8 (low actionability). The mean Flesch-Kincaid reading ease score was 32 ± 25.7, and the mean Simple Measure of Gobbledygook (SMOG) index score was 15.3 ± 9.1 (indicating poor readability for the lay person). Misleading and false information was detected in some answers.
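To make the two readability metrics concrete: Flesch-Kincaid reading ease scores run from 0 to 100 (higher is easier; 32 falls in the "difficult, college-level" band), while the SMOG index estimates the years of education needed to understand a text (15.3 implies university-level reading). A minimal sketch of both formulas, using a crude vowel-group syllable heuristic (production tools such as the textstat library use more careful syllable counting):

```python
import math
import re

def count_syllables(word: str) -> int:
    # Crude approximation: count groups of consecutive vowels.
    # Real readability tools use dictionaries or refined heuristics.
    groups = re.findall(r"[aeiouy]+", word.lower())
    return max(1, len(groups))

def flesch_reading_ease(text: str) -> float:
    # FRE = 206.835 - 1.015 * (words/sentences) - 84.6 * (syllables/words)
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(count_syllables(w) for w in words)
    return (206.835
            - 1.015 * (len(words) / len(sentences))
            - 84.6 * (syllables / len(words)))

def smog_index(text: str) -> float:
    # SMOG = 1.0430 * sqrt(polysyllables * 30 / sentences) + 3.1291
    # where polysyllables are words with 3+ syllables.
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    polysyllables = sum(1 for w in words if count_syllables(w) >= 3)
    return 1.0430 * math.sqrt(polysyllables * 30 / len(sentences)) + 3.1291
```

A short monosyllabic text ("The cat sat. The dog ran.") scores near the top of the reading-ease scale and at the SMOG floor of about 3.1, illustrating why long, polysyllabic chatbot answers score poorly on both measures.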
Conclusion: ChatGPT has significant potential as a tool for answering parents' questions about ECC. Concerns, however, do exist about the readability and actionability of the answers. The presence of false information should not be overlooked.
Journal description:
The International Journal of Paediatric Dentistry was formed in 1991 by the merger of the journals of the International Association of Paediatric Dentistry and the British Society of Paediatric Dentistry and is published bi-monthly. It has a truly international scope and aims to promote the highest standard of education, practice and research in paediatric dentistry worldwide.
International Journal of Paediatric Dentistry publishes papers on all aspects of paediatric dentistry, including growth and development, behaviour management, diagnosis, prevention, restorative treatment, and issues relating to medically compromised children or those with disabilities. This peer-reviewed journal features scientific articles, reviews, case reports, clinical techniques, short communications and abstracts of current paediatric dental research. Analytical studies with scientific novelty are preferred to descriptive studies. Case reports illustrating unusual conditions and clinically relevant observations are acceptable but must be of sufficiently high quality to be considered for publication; in particular, the illustrative material must be of the highest quality.