ChatGPT and oral cancer: a study on informational reliability.

IF 2.6 | CAS Region 2 (Medicine) | Q1 DENTISTRY, ORAL SURGERY & MEDICINE | BMC Oral Health 25(1):86 | Pub Date: 2025-01-17 | DOI: 10.1186/s12903-025-05479-4 | Open access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11745001/pdf/
Mesude Çıtır
{"title":"ChatGPT与口腔癌:信息可靠性研究。","authors":"Mesude Çi Ti R","doi":"10.1186/s12903-025-05479-4","DOIUrl":null,"url":null,"abstract":"<p><strong>Background: </strong>Artificial intelligence (AI) and large language models (LLMs) like ChatGPT have transformed information retrieval, including in healthcare. ChatGPT, trained on diverse datasets, can provide medical advice but faces ethical and accuracy concerns. This study evaluates the accuracy of ChatGPT-3.5's answers to frequently asked questions about oral cancer, a condition where early diagnosis is crucial for improving patient outcomes.</p><p><strong>Methods: </strong>A total of 20 questions were asked to ChatGPT-3.5, selected from Google Trends and questions asked by patients in the clinic. The responses provided by ChatGPT were evaluated for accuracy by medical oncologists and oral and maxillofacial radiologists. Inter-rater agreement was assessed using Fleiss's and Cohen kappa tests. The scores given by the specialties were compared with the Mann-Whitney U test. The references provided by ChatGPT-3.5 were evaluated for authenticity.</p><p><strong>Results: </strong>Of the 80 responses from 20 questions, 41 (51.25%) were rated as very good, 37 (46.25%) as good, 2 (2.50%) as acceptable. There was no significant difference between oral and maxillofacial radiologists and medical oncologists in all 20 questions. Of the 81 references to ChatGPT-3.5 answers, only 13 were scientific articles, 10 were fake, and the remaining references were data from websites.</p><p><strong>Conclusion: </strong>ChatGPT provided reliable information about oral cancer and did not provide incorrect information and suggestions. However, all information provided by ChatGPT is not based on real references.</p>","PeriodicalId":9072,"journal":{"name":"BMC Oral Health","volume":"25 1","pages":"86"},"PeriodicalIF":2.6000,"publicationDate":"2025-01-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11745001/pdf/","citationCount":"0","resultStr":"{\"title\":\"ChatGPT and oral cancer: a study on informational reliability.\",\"authors\":\"Mesude Çi Ti R\",\"doi\":\"10.1186/s12903-025-05479-4\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<p><strong>Background: </strong>Artificial intelligence (AI) and large language models (LLMs) like ChatGPT have transformed information retrieval, including in healthcare. ChatGPT, trained on diverse datasets, can provide medical advice but faces ethical and accuracy concerns. This study evaluates the accuracy of ChatGPT-3.5's answers to frequently asked questions about oral cancer, a condition where early diagnosis is crucial for improving patient outcomes.</p><p><strong>Methods: </strong>A total of 20 questions were asked to ChatGPT-3.5, selected from Google Trends and questions asked by patients in the clinic. The responses provided by ChatGPT were evaluated for accuracy by medical oncologists and oral and maxillofacial radiologists. Inter-rater agreement was assessed using Fleiss's and Cohen kappa tests. The scores given by the specialties were compared with the Mann-Whitney U test. The references provided by ChatGPT-3.5 were evaluated for authenticity.</p><p><strong>Results: </strong>Of the 80 responses from 20 questions, 41 (51.25%) were rated as very good, 37 (46.25%) as good, 2 (2.50%) as acceptable. There was no significant difference between oral and maxillofacial radiologists and medical oncologists in all 20 questions. 
Of the 81 references to ChatGPT-3.5 answers, only 13 were scientific articles, 10 were fake, and the remaining references were data from websites.</p><p><strong>Conclusion: </strong>ChatGPT provided reliable information about oral cancer and did not provide incorrect information and suggestions. However, all information provided by ChatGPT is not based on real references.</p>\",\"PeriodicalId\":9072,\"journal\":{\"name\":\"BMC Oral Health\",\"volume\":\"25 1\",\"pages\":\"86\"},\"PeriodicalIF\":2.6000,\"publicationDate\":\"2025-01-17\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11745001/pdf/\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"BMC Oral Health\",\"FirstCategoryId\":\"3\",\"ListUrlMain\":\"https://doi.org/10.1186/s12903-025-05479-4\",\"RegionNum\":2,\"RegionCategory\":\"医学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q1\",\"JCRName\":\"DENTISTRY, ORAL SURGERY & MEDICINE\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"BMC Oral Health","FirstCategoryId":"3","ListUrlMain":"https://doi.org/10.1186/s12903-025-05479-4","RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"DENTISTRY, ORAL SURGERY & MEDICINE","Score":null,"Total":0}
Citations: 0

Abstract


Background: Artificial intelligence (AI) and large language models (LLMs) like ChatGPT have transformed information retrieval, including in healthcare. ChatGPT, trained on diverse datasets, can provide medical advice but faces ethical and accuracy concerns. This study evaluates the accuracy of ChatGPT-3.5's answers to frequently asked questions about oral cancer, a condition where early diagnosis is crucial for improving patient outcomes.

Methods: A total of 20 questions, drawn from Google Trends and from questions asked by patients in the clinic, were posed to ChatGPT-3.5. The responses were evaluated for accuracy by medical oncologists and oral and maxillofacial radiologists. Inter-rater agreement was assessed with Fleiss' kappa and Cohen's kappa, and the scores given by the two specialties were compared with the Mann-Whitney U test. The references provided by ChatGPT-3.5 were checked for authenticity.
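Below is a minimal sketch of the statistical pipeline the Methods describe, using fabricated ratings because the study's raw scores are not given in the abstract. It assumes the layout implied by the Results (20 answers, each scored by four evaluators, two per specialty) and an ordinal scale where 2 = acceptable, 3 = good, 4 = very good; the rating matrix, the rater-to-specialty split, and the scale labels are illustrative assumptions, not the study's data.

```python
# Illustrative sketch only; the ratings below are synthetic, NOT the study's data.
import numpy as np
from scipy.stats import mannwhitneyu
from sklearn.metrics import cohen_kappa_score
from statsmodels.stats.inter_rater import aggregate_raters, fleiss_kappa

rng = np.random.default_rng(0)
# ratings[i, j] = score that rater j gave to answer i
# (assumed scale: 2 = acceptable, 3 = good, 4 = very good)
ratings = rng.integers(2, 5, size=(20, 4))

# Fleiss' kappa: agreement among all four raters at once.
table, _ = aggregate_raters(ratings)          # answers x rating-categories counts
print("Fleiss' kappa:", fleiss_kappa(table, method="fleiss"))

# Cohen's kappa: pairwise agreement, e.g. between the two raters of one specialty.
print("Cohen's kappa (raters 0 vs 1):",
      cohen_kappa_score(ratings[:, 0], ratings[:, 1]))

# Mann-Whitney U: do the two specialties score the answers differently?
# (assumed split: columns 0-1 = medical oncologists, 2-3 = OMF radiologists)
u, p = mannwhitneyu(ratings[:, :2].ravel(), ratings[:, 2:].ravel(),
                    alternative="two-sided")
print(f"Mann-Whitney U = {u}, p = {p:.3f}")   # p > 0.05 -> no significant difference
```

Fleiss' kappa generalizes Cohen's kappa to more than two raters, which is presumably why the study reports both: Cohen's for pairwise agreement and Fleiss' for the evaluator panel as a whole.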

Results: Of the 80 ratings for the 20 answers (each answer was scored by four evaluators), 41 (51.25%) were rated very good, 37 (46.25%) good, and 2 (2.50%) acceptable. There was no significant difference between the oral and maxillofacial radiologists and the medical oncologists on any of the 20 questions. Of the 81 references cited in ChatGPT-3.5's answers, only 13 were scientific articles, 10 were fabricated, and the remaining 58 were data from websites.

Conclusion: ChatGPT provided reliable information about oral cancer and did not give incorrect information or suggestions. However, not all of the information provided by ChatGPT is based on genuine references.

Source journal: BMC Oral Health (DENTISTRY, ORAL SURGERY & MEDICINE)
CiteScore: 3.90
Self-citation rate: 6.90%
Articles per year: 481
Review time: 6-12 weeks
About the journal: BMC Oral Health is an open access, peer-reviewed journal that considers articles on all aspects of the prevention, diagnosis and management of disorders of the mouth, teeth and gums, as well as related molecular genetics, pathophysiology, and epidemiology.
Latest articles in this journal:
Effect of different ceramic materials and dentin sealing on occlusal veneers bond strength and fracture resistance.
Correlates of oral health-related quality of life in a sample of patients with rheumatoid arthritis.
Effectiveness of a novel amine + zinc + fluoride toothpaste in reducing plaque and gingivitis: results of a six-month randomized controlled trial.
Performance of artificial intelligence on cervical vertebral maturation assessment: a systematic review and meta-analysis.
Photobiomodulation preconditioning for oral mucositis prevention and quality of life improvement in chemotherapy patients: a randomized clinical trial.