ChatGPT and Factual Knowledge Questions Regarding Clinical Pharmacy: Correspondence

Hinpetch Daungsupawong PhD, Viroj Wiwanitkit MD

The Journal of Clinical Pharmacology, 64(9), p. 1185. Published 2024-06-04. DOI: 10.1002/jcph.2479 (https://onlinelibrary.wiley.com/doi/10.1002/jcph.2479)

Dear Editor,

This letter concerns the article "Performance of ChatGPT on Factual Knowledge Questions Regarding Clinical Pharmacy."1 In that study, the researchers evaluated the ability of ChatGPT, a large language model, to answer factual knowledge questions about clinical pharmacy. ChatGPT was asked 264 questions in total, and its answers were assessed for accuracy, consistency, quality of substantiation, and reproducibility. ChatGPT answered 79% of the questions correctly, outperforming the pharmacists' accuracy rate of 66%, and its answers agreed with the correct answers 95% of the time.

One weakness of the study is that ChatGPT's performance was assessed on only 264 questions, which may not adequately convey the strengths and limitations of the approach across the wider range of clinical pharmacy topics. Furthermore, the study included only factual knowledge questions, which may not capture the subtleties and complexities frequently present in clinical practice. There may also have been biases in the questions chosen or in the evaluation standards the researchers employed. Two specific methodological shortcomings are the limited variety of the questions posed to ChatGPT and possible inconsistencies in the independent pharmacists' ratings of substantiation quality. Finally, the study did not examine ChatGPT's interpretative or reasoning abilities when applying clinical pharmacy knowledge to real-world circumstances; these elements are necessary for a thorough assessment of ChatGPT's usefulness in clinical settings.

Future research could extend the question set to cover a greater variety of clinical pharmacy topics, including more intricate and nuanced scenarios. Further investigation of ChatGPT's capacity to justify and explain its conclusions might improve its suitability for supporting pharmacists' decision making. Longitudinal studies could examine ChatGPT's long-term effectiveness and its effect on clinical outcomes in pharmacy practice. As the technology advances, continuous upgrades and enhancements might increase ChatGPT's functionality and solidify its position as a trustworthy resource for pharmacists.

Author contributions: Hinpetch Daungsupawong: 50% ideas, writing, analyzing, approval. Viroj Wiwanitkit: 50% ideas, supervision, approval.

The authors declare no conflicts of interest.
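The sample-size concern raised above can be put in rough quantitative terms. A minimal sketch, assuming a simple binomial model for the reported figures (79% accuracy on 264 questions; the study under discussion reports no such interval, so this is purely illustrative): a 95% Wald confidence interval spans roughly five percentage points in each direction, which gives a sense of the uncertainty a 264-question benchmark carries.

```python
import math

def wald_ci(p_hat: float, n: int, z: float = 1.96) -> tuple[float, float]:
    """Approximate 95% Wald confidence interval for a binomial proportion."""
    se = math.sqrt(p_hat * (1 - p_hat) / n)
    return (p_hat - z * se, p_hat + z * se)

# Figures reported in the study under discussion:
# 79% accuracy on 264 factual knowledge questions.
lo, hi = wald_ci(0.79, 264)
print(f"95% CI for the reported accuracy: {lo:.3f} to {hi:.3f}")
```

An interval of roughly 0.74 to 0.84 still lies above the pharmacists' 66%, but it illustrates why a broader and more varied question set, as suggested above, would yield a more precise and generalizable estimate.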