Comparative study of ChatGPT and human evaluators on the assessment of medical literature according to recognised reporting standards.

BMJ Health & Care Informatics · IF 4.1 · Q1 (Health Care Sciences & Services) · Pub Date: 2023-10-01 · DOI: 10.1136/bmjhci-2023-100830
Richard Hr Roberts, Stephen R Ali, Hayley A Hutchings, Thomas D Dobbs, Iain S Whitaker
{"title":"Comparative study of ChatGPT and human evaluators on the assessment of medical literature according to recognised reporting standards.","authors":"Richard Hr Roberts,&nbsp;Stephen R Ali,&nbsp;Hayley A Hutchings,&nbsp;Thomas D Dobbs,&nbsp;Iain S Whitaker","doi":"10.1136/bmjhci-2023-100830","DOIUrl":null,"url":null,"abstract":"Introduction Amid clinicians’ challenges in staying updated with medical research, artificial intelligence (AI) tools like the large language model (LLM) ChatGPT could automate appraisal of research quality, saving time and reducing bias. This study compares the proficiency of ChatGPT3 against human evaluation in scoring abstracts to determine its potential as a tool for evidence synthesis. Methods We compared ChatGPT’s scoring of implant dentistry abstracts with human evaluators using the Consolidated Standards of Reporting Trials for Abstracts reporting standards checklist, yielding an overall compliance score (OCS). Bland-Altman analysis assessed agreement between human and AI-generated OCS percentages. Additional error analysis included mean difference of OCS subscores, Welch’s t-test and Pearson’s correlation coefficient. Results Bland-Altman analysis showed a mean difference of 4.92% (95% CI 0.62%, 0.37%) in OCS between human evaluation and ChatGPT. Error analysis displayed small mean differences in most domains, with the highest in ‘conclusion’ (0.764 (95% CI 0.186, 0.280)) and the lowest in ‘blinding’ (0.034 (95% CI 0.818, 0.895)). The strongest correlations between were in ‘harms’ (r=0.32, p<0.001) and ‘trial registration’ (r=0.34, p=0.002), whereas the weakest were in ‘intervention’ (r=0.02, p<0.001) and ‘objective’ (r=0.06, p<0.001). Conclusion LLMs like ChatGPT can help automate appraisal of medical literature, aiding in the identification of accurately reported research. Possible applications of ChatGPT include integration within medical databases for abstract evaluation. Current limitations include the token limit, restricting its usage to abstracts. As AI technology advances, future versions like GPT4 could offer more reliable, comprehensive evaluations, enhancing the identification of high-quality research and potentially improving patient outcomes.","PeriodicalId":9050,"journal":{"name":"BMJ Health & Care Informatics","volume":"30 1","pages":""},"PeriodicalIF":4.1000,"publicationDate":"2023-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10583079/pdf/","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"BMJ Health & Care Informatics","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1136/bmjhci-2023-100830","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"HEALTH CARE SCIENCES & SERVICES","Score":null,"Total":0}
引用次数: 0

Abstract

Introduction: Amid clinicians’ challenges in staying updated with medical research, artificial intelligence (AI) tools like the large language model (LLM) ChatGPT could automate appraisal of research quality, saving time and reducing bias. This study compares the proficiency of ChatGPT3 against human evaluation in scoring abstracts to determine its potential as a tool for evidence synthesis.

Methods: We compared ChatGPT’s scoring of implant dentistry abstracts with human evaluators using the Consolidated Standards of Reporting Trials for Abstracts (CONSORT for Abstracts) checklist, yielding an overall compliance score (OCS). Bland-Altman analysis assessed agreement between human-generated and AI-generated OCS percentages. Additional error analysis included the mean difference of OCS subscores, Welch’s t-test and Pearson’s correlation coefficient.

Results: Bland-Altman analysis showed a mean difference of 4.92% (95% CI 0.62%, 0.37%) in OCS between human evaluation and ChatGPT. Error analysis displayed small mean differences in most domains, with the highest in ‘conclusion’ (0.764 (95% CI 0.186, 0.280)) and the lowest in ‘blinding’ (0.034 (95% CI 0.818, 0.895)). The strongest correlations were in ‘harms’ (r=0.32, p<0.001) and ‘trial registration’ (r=0.34, p=0.002), whereas the weakest were in ‘intervention’ (r=0.02, p<0.001) and ‘objective’ (r=0.06, p<0.001).

Conclusion: LLMs like ChatGPT can help automate the appraisal of medical literature, aiding in the identification of accurately reported research. Possible applications of ChatGPT include integration within medical databases for abstract evaluation. Current limitations include the token limit, which restricts its usage to abstracts. As AI technology advances, future versions like GPT-4 could offer more reliable, comprehensive evaluations, enhancing the identification of high-quality research and potentially improving patient outcomes.
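The agreement statistics reported in the Results (Bland-Altman bias with limits of agreement, Welch’s t-test and Pearson’s r) can be computed from paired human and model scores. The sketch below is illustrative only, not the authors’ code; the `human_ocs` and `model_ocs` arrays are hypothetical placeholder values standing in for per-abstract OCS percentages.

```python
# Minimal sketch: human-vs-model agreement on overall compliance scores (OCS).
# The score arrays here are hypothetical examples, not study data.
import numpy as np
from scipy import stats

# Hypothetical per-abstract OCS percentages (0-100) from each rater.
human_ocs = np.array([72.0, 65.0, 80.0, 58.0, 91.0, 77.0])
model_ocs = np.array([70.0, 69.0, 74.0, 60.0, 88.0, 73.0])

# Bland-Altman: mean difference (bias) and 95% limits of agreement.
diff = human_ocs - model_ocs
bias = diff.mean()
loa = 1.96 * diff.std(ddof=1)  # sample SD for the limits of agreement
print(f"bias={bias:.2f}%, limits of agreement=({bias - loa:.2f}, {bias + loa:.2f})")

# Welch's t-test (unequal variances) comparing the two sets of scores.
t, p = stats.ttest_ind(human_ocs, model_ocs, equal_var=False)
print(f"Welch t={t:.3f}, p={p:.3f}")

# Pearson's correlation between human and model scores.
r, p_r = stats.pearsonr(human_ocs, model_ocs)
print(f"Pearson r={r:.3f}, p={p_r:.3f}")
```

Note that `equal_var=False` is what selects Welch’s unequal-variance t-test in SciPy, and `ddof=1` gives the sample standard deviation conventionally used for the 1.96·SD limits of agreement.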

Source journal: BMJ Health & Care Informatics
CiteScore: 6.10
Self-citation rate: 4.90%
Annual publication volume: 40
Average review time: 18 weeks
Latest articles from this journal:
Scaling equitable artificial intelligence in healthcare with machine learning operations.
Understanding prescribing errors for system optimisation: the technology-related error mechanism classification.
Detection of hypertension from pharyngeal images using deep learning algorithm in primary care settings in Japan.
PubMed captures more fine-grained bibliographic data on scientific commentary than Web of Science: a comparative analysis.
Method to apply temporal graph analysis on electronic patient record data to explore healthcare professional-patient interaction intensity: a cohort study.