ChatGPT and Factual Knowledge Questions Regarding Clinical Pharmacy: Response to Letter to the Editor

Merel van Nuland PharmD, PhD
{"title":"ChatGPT and Factual Knowledge Questions Regarding Clinical Pharmacy: Response to Letter to the Editor","authors":"Merel van Nuland PharmD, PhD","doi":"10.1002/jcph.2481","DOIUrl":null,"url":null,"abstract":"<p>Dear Editor,</p><p>The discourse surrounding the article titled “Performance of ChatGPT on Factual Knowledge Questions Regarding Clinical Pharmacy” warrants further examination and critique. The study undertook an evaluation of ChatGPT's efficacy in responding to factual knowledge questions concerning clinical pharmacy. Through a series of 264 questions, ChatGPT's responses were analyzed for accuracy, consistency, quality of the substantiation, and reproducibility, yielding notable results. ChatGPT demonstrated a 79% correctness rate, surpassing the 66% accuracy rate of pharmacists.</p><p>Acknowledging the limitations outlined in the discussion section, it is important to note that this study solely focused on factual knowledge questions. The primary objective was to determine ChatGPT's performance in responding to factual knowledge questions rather than its proficiency in clinical reasoning. Consequently, the study refrained from drawing conclusions regarding ChatGPT's impact on clinical decision-making, as this aspect falls under the scope of separate research endeavors.<span><sup>1</sup></span></p><p>Addressing the limitations, we argue that the scale of 264 questions, and a lack of variety are limitations of this study. The number of questions aligns with similar studies such as the USMLE Step 1, comprising 280 questions,<span><sup>2</sup></span> and the Taiwanese pharmacist licensing examination, consisting of 431 questions.<span><sup>3</sup></span> Additionally, the span of topics covered in our questions is deemed representative of a pharmacist's factual knowledge base within clinical pharmacy.</p><p>The authors acknowledge the need for further investigation into ChatGPT's clinical applicability, for example, with longitudinal studies. Furthermore, exploring ChatGPT's capacity to provide justifications and explanations for its responses could augment its efficacy in aiding pharmacist decision-making processes. Continuous refinement and augmentation of ChatGPT are essential to strengthen its functionality as a tool for pharmacists in the clinic. Still, the indispensable expertise and interpretive skills of clinical pharmacists is pivotal to applying this information in the clinic. The factual information produced by ChatGPT holds potential as a valuable resource, however, it is imperative that the responses undergo rigorous assessment for accuracy and clinical applicability under the scrutiny of clinical pharmacists.</p><p>Sincerely,</p><p>Merel van Nuland</p>","PeriodicalId":22751,"journal":{"name":"The Journal of Clinical Pharmacology","volume":null,"pages":null},"PeriodicalIF":0.0000,"publicationDate":"2024-06-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://onlinelibrary.wiley.com/doi/epdf/10.1002/jcph.2481","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"The Journal of Clinical Pharmacology","FirstCategoryId":"1085","ListUrlMain":"https://onlinelibrary.wiley.com/doi/10.1002/jcph.2481","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 0

Abstract

Dear Editor,

The discourse surrounding the article titled “Performance of ChatGPT on Factual Knowledge Questions Regarding Clinical Pharmacy” warrants further examination and critique. The study evaluated ChatGPT's efficacy in responding to factual knowledge questions concerning clinical pharmacy. Across a series of 264 questions, ChatGPT's responses were analyzed for accuracy, consistency, quality of substantiation, and reproducibility, yielding notable results: ChatGPT demonstrated a 79% correctness rate, surpassing the 66% accuracy rate of pharmacists.

While acknowledging the limitations outlined in the discussion section, we note that this study focused solely on factual knowledge questions. The primary objective was to determine ChatGPT's performance in responding to factual knowledge questions rather than its proficiency in clinical reasoning. Consequently, the study refrained from drawing conclusions regarding ChatGPT's impact on clinical decision-making, as this aspect falls under the scope of separate research endeavors.1

Addressing the concern that the scale of 264 questions and a lack of variety limit this study, we note that the number of questions aligns with similar studies, such as the USMLE Step 1, comprising 280 questions,2 and the Taiwanese pharmacist licensing examination, consisting of 431 questions.3 Additionally, the span of topics covered in our questions is deemed representative of a pharmacist's factual knowledge base within clinical pharmacy.

The authors acknowledge the need for further investigation into ChatGPT's clinical applicability, for example, with longitudinal studies. Furthermore, exploring ChatGPT's capacity to provide justifications and explanations for its responses could augment its efficacy in aiding pharmacist decision-making processes. Continuous refinement and augmentation of ChatGPT are essential to strengthen its functionality as a tool for pharmacists in the clinic. Still, the indispensable expertise and interpretive skills of clinical pharmacists are pivotal to applying this information in the clinic. The factual information produced by ChatGPT holds potential as a valuable resource; however, it is imperative that the responses undergo rigorous assessment for accuracy and clinical applicability under the scrutiny of clinical pharmacists.

Sincerely,

Merel van Nuland
