Ole Kl Helgestad, Astrid J Hjelholt, Søren V Vestergaard, Samuel Azuz, Eva A Sædder, Thure F Overvad
Title: ChatGPT versus physician-derived answers to drug-related questions
Journal: Danish medical journal, volume 72, issue 1 (Q3, Medicine, General & Internal; impact factor 1.0)
DOI: 10.61409/A05240360
Publication date: 2024-12-11
Publication type: Journal Article
Funding: None. Trial registration: Not relevant.
Citations: 0
Abstract
Introduction: Large language models have recently gained interest within the medical community. Their clinical impact is currently being investigated, with potential application in pharmaceutical counselling, which has yet to be assessed.
Methods: We performed a retrospective investigation of ChatGPT 3.5 and 4.0 in response to 49 consecutive inquiries encountered in the joint pharmaceutical counselling service of the Central and North Denmark regions. Answers were rated by comparing them with the answers generated by physicians.
Results: ChatGPT 3.5 and 4.0 provided answers rated as better than or equal to those of the pharmaceutical counselling service in 39 (80%) and 48 (98%) cases, respectively. Answers from ChatGPT were not accompanied by references, and when providing multiple answers, ChatGPT did not elaborate on which would be considered most clinically relevant.
Conclusions: In drug-related questions, ChatGPT (4.0) provided answers of a reasonably high quality. The lack of references and an occasionally limited clinical interpretation make it less useful as a primary source of information.
Journal introduction:
The Danish Medical Journal (DMJ) is a general medical journal. The journal publishes original research in English, conducted in or in relation to the Danish health-care system. When writing for the Danish Medical Journal, please keep in mind the target audience, which is the general reader. This means that the research area should be relevant to many readers and the paper should be presented in a way that most readers will understand the content.
DMJ will publish the following articles:
• Original articles
• Protocol articles from large randomized clinical trials
• Systematic reviews and meta-analyses
• PhD theses from Danish faculties of health sciences
• DMSc theses from Danish faculties of health sciences.