Is ChatGPT knowledgeable of acute coronary syndromes and pertinent European Society of Cardiology Guidelines?

Minerva Cardiology and Angiology · IF 1.4 · JCR Q3, Cardiac & Cardiovascular Systems · CAS Zone 4, Medicine · Pub Date: 2024-06-01 · Epub Date: 2024-02-23 · DOI: 10.23736/S2724-5683.24.06517-7
Dogac C Gurbuz, Eser Varis
{"title":"Is ChatGPT knowledgeable of acute coronary syndromes and pertinent European Society of Cardiology Guidelines?","authors":"Dogac C Gurbuz, Eser Varis","doi":"10.23736/S2724-5683.24.06517-7","DOIUrl":null,"url":null,"abstract":"<p><strong>Background: </strong>Advancements in artificial intelligence are being seen in multiple fields, including medicine, and this trend is likely to continue going forward. To analyze the accuracy and reproducibility of ChatGPT answers about acute coronary syndromes (ACS).</p><p><strong>Methods: </strong>The questions asked to ChatGPT were prepared in two categories. A list of frequently asked questions (FAQs) created from inquiries asked by the public and while preparing the scientific question list, 2023 European Society of Cardiology (ESC) Guidelines for the management of ACS and ESC Clinical Practice Guidelines were used. Accuracy and reproducibility of ChatGPT responses about ACS were evaluated by two cardiologists with ten years of experience using Global Quality Score (GQS).</p><p><strong>Results: </strong>Eventually, 72 FAQs related to ACS met the study inclusion criteria. In total, 65 (90.3%) ChatGPT answers scored GQS 5, which indicated highest accuracy and proficiency. None of the ChatGPT responses to FAQs about ACS scored GQS 1. In addition, highest accuracy and reliability of ChatGPT answers was obtained for the prevention and lifestyle section with GQS 5 for 19 (95%) answers, and GQS 4 for 1 (5%) answer. In contrast, accuracy and proficiency of ChatGPT answers were lowest for the treatment and management section. Moreover, 68 (88.3%) ChatGPT responses for guideline based questions scored GQS 5. Reproducibility of ChatGPT answers was 94.4% for FAQs and 90.9% for ESC guidelines questions.</p><p><strong>Conclusions: </strong>This study shows for the first time that ChatGPT can give accurate and sufficient responses to more than 90% of FAQs about ACS. In addition, proficiency and correctness of ChatGPT answers about questions depending on ESC guidelines was also substantial.</p>","PeriodicalId":18668,"journal":{"name":"Minerva cardiology and angiology","volume":null,"pages":null},"PeriodicalIF":1.4000,"publicationDate":"2024-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Minerva cardiology and angiology","FirstCategoryId":"3","ListUrlMain":"https://doi.org/10.23736/S2724-5683.24.06517-7","RegionNum":4,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"2024/2/23 0:00:00","PubModel":"Epub","JCR":"Q3","JCRName":"CARDIAC & CARDIOVASCULAR SYSTEMS","Score":null,"Total":0}
Citations: 0

Abstract

Background: Artificial intelligence is advancing in multiple fields, including medicine, and this trend is likely to continue. The aim of this study was to analyze the accuracy and reproducibility of ChatGPT answers about acute coronary syndromes (ACS).

Methods: The questions posed to ChatGPT were prepared in two categories: a list of frequently asked questions (FAQs) compiled from inquiries made by the public, and a scientific question list based on the 2023 European Society of Cardiology (ESC) Guidelines for the management of ACS and the ESC Clinical Practice Guidelines. The accuracy and reproducibility of the ChatGPT responses about ACS were evaluated by two cardiologists, each with ten years of experience, using the Global Quality Score (GQS).
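The abstract does not specify how the ratings were recorded; below is a minimal, hypothetical Python sketch of one way the dual-rater GQS data described in the Methods could be organized. All field names and example entries are assumptions for illustration, not details taken from the study.

# Hypothetical layout for the rating data described in the Methods.
# GQS is an ordinal 1-5 scale, with 5 indicating the highest quality.
from dataclasses import dataclass

@dataclass
class RatedAnswer:
    question: str      # prompt given to ChatGPT
    category: str      # "FAQ" or "ESC guideline"
    section: str       # e.g. "prevention and lifestyle"
    gqs_rater1: int    # GQS (1-5) from the first cardiologist
    gqs_rater2: int    # GQS (1-5) from the second cardiologist

# Illustrative entries only; the actual study questions are not reproduced in the abstract.
ratings = [
    RatedAnswer("What are the warning signs of a heart attack?",
                "FAQ", "symptoms", 5, 5),
    RatedAnswer("How should antithrombotic therapy be selected in ACS?",
                "ESC guideline", "treatment and management", 4, 4),
]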

Results: In total, 72 FAQs related to ACS met the study inclusion criteria. Of the ChatGPT answers to these FAQs, 65 (90.3%) scored GQS 5, indicating the highest accuracy and proficiency, and none scored GQS 1. Accuracy and reliability were highest for the prevention and lifestyle section, with GQS 5 for 19 (95%) answers and GQS 4 for 1 (5%) answer, and lowest for the treatment and management section. For guideline-based questions, 68 (88.3%) ChatGPT responses scored GQS 5. Reproducibility of ChatGPT answers was 94.4% for FAQs and 90.9% for ESC guideline questions.
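As a reader's check of the arithmetic (not part of the original study), the reported proportions are internally consistent: 65 of 72 FAQ answers is 90.3%, the 88.3% figure implies roughly 77 guideline-based questions, and the 95%/5% split implies 20 prevention and lifestyle questions; neither of the latter two totals is stated explicitly in the abstract.

# Hedged arithmetic check of the percentages quoted in the Results.
faq_total, faq_gqs5 = 72, 65
print(f"FAQ answers scoring GQS 5: {faq_gqs5 / faq_total:.1%}")  # 90.3%

# Guideline-question total inferred from 68 / 0.883; not stated in the abstract.
guideline_gqs5, guideline_share = 68, 0.883
print(f"Implied number of guideline questions: {guideline_gqs5 / guideline_share:.0f}")  # ~77

# Prevention/lifestyle total inferred from 19 (95%) + 1 (5%); not stated in the abstract.
prevention_total, prevention_gqs5 = 20, 19
print(f"Prevention/lifestyle answers scoring GQS 5: {prevention_gqs5 / prevention_total:.0%}")  # 95%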

Conclusions: This study shows for the first time that ChatGPT can give accurate and sufficient responses to more than 90% of FAQs about ACS. The proficiency and correctness of ChatGPT answers to questions based on the ESC guidelines were also substantial.

Source journal
Minerva Cardiology and Angiology (Cardiac & Cardiovascular Systems)
CiteScore: 2.60 · Self-citation rate: 18.80% · Publications: 118