Performance of Large Language Models ChatGPT and Gemini on Workplace Management Questions in Radiology.

IF 3.3 | JCR Q1 (Medicine, General & Internal) | CAS Tier 3 (Medicine) | Diagnostics | Pub Date: 2025-02-19 | DOI: 10.3390/diagnostics15040497
Patricia Leutz-Schmidt, Viktoria Palm, René Michael Mathy, Martin Grözinger, Hans-Ulrich Kauczor, Hyungseok Jang, Sam Sedaghat

Abstract

Background/Objectives: Despite the growing popularity of large language models (LLMs), there remains a notable lack of research examining their role in workplace management. This study aimed to address this gap by evaluating the performance of four widely used LLMs, ChatGPT-3.5, ChatGPT-4.0, Gemini, and Gemini Advanced, in responding to workplace management questions specific to radiology.

Methods: ChatGPT-3.5 and ChatGPT-4.0 (both OpenAI, San Francisco, CA, USA) and Gemini and Gemini Advanced (both Google DeepMind, Mountain View, CA, USA) generated answers to 31 pre-selected questions covering four areas of workplace management in radiology: (1) patient management, (2) imaging and radiation management, (3) learning and personal development, and (4) administrative and department management. Two readers independently evaluated the answers provided by the LLM chatbots. Three 4-point scores were used to assess the quality of the responses: (1) overall quality score (OQS), (2) understandability score (US), and (3) implementability score (IS). The mean quality score (MQS) was calculated from these three scores.

Results: The overall inter-rater reliability (IRR) was good for Gemini Advanced (IRR 79%), Gemini (IRR 78%), and ChatGPT-3.5 (IRR 65%), and moderate for ChatGPT-4.0 (IRR 54%). The overall MQS averaged 3.36 (SD: 0.64) for ChatGPT-3.5, 3.75 (SD: 0.43) for ChatGPT-4.0, 3.29 (SD: 0.64) for Gemini, and 3.51 (SD: 0.53) for Gemini Advanced. ChatGPT-4.0 achieved the highest OQS, US, IS, and MQS in all categories, followed by Gemini Advanced. ChatGPT-4.0 was the most consistently superior performer and outperformed all other chatbots (p ranging from <0.001 to 0.002). Gemini Advanced performed significantly better than Gemini (p = 0.003) and showed a non-significant trend toward outperforming ChatGPT-3.5 (p = 0.056). ChatGPT-4.0 provided superior answers in most cases compared with the other LLM chatbots. None of the answers provided by the chatbots were rated "insufficient".
Conclusions: All four LLM chatbots performed well on workplace management questions in radiology. ChatGPT-4.0 outperformed ChatGPT-3.5, Gemini, and Gemini Advanced. Our study revealed that LLMs have the potential to improve workplace management in radiology by assisting with various tasks, making these processes more efficient without requiring specialized management skills.
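The scoring procedure described in the Methods (three 4-point sub-scores averaged into an MQS, plus two-reader agreement) can be sketched as below. This is an illustrative reconstruction, not the authors' analysis code: the function names are invented, and simple percent agreement is assumed as the IRR measure, which the abstract does not specify.

```python
# Illustrative sketch of the abstract's scoring scheme: each chatbot answer
# receives three 4-point sub-scores -- overall quality (OQS), understandability
# (US), and implementability (IS) -- and their mean is the MQS.

def mean_quality_score(oqs: int, us: int, is_: int) -> float:
    """Average the three 4-point sub-scores into one MQS."""
    for score in (oqs, us, is_):
        if not 1 <= score <= 4:
            raise ValueError("sub-scores are on a 1-4 scale")
    return (oqs + us + is_) / 3


def percent_agreement(rater_a: list, rater_b: list) -> float:
    """Percent agreement between two independent readers (an assumed
    reading of the abstract's IRR percentages)."""
    if len(rater_a) != len(rater_b):
        raise ValueError("both readers must score the same set of answers")
    matches = sum(a == b for a, b in zip(rater_a, rater_b))
    return 100.0 * matches / len(rater_a)


# Example: an answer scored 4 (OQS), 4 (US), 3 (IS) yields an MQS of ~3.67.
print(round(mean_quality_score(4, 4, 3), 2))
```

For the actual study, agreement may instead have been computed with a chance-corrected statistic such as Cohen's kappa; the percent-agreement helper above is only the simplest interpretation of the reported IRR values.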


Source journal: Diagnostics (Biochemistry, Genetics and Molecular Biology: Clinical Biochemistry)
CiteScore: 4.70
Self-citation rate: 8.30%
Articles published: 2699
Average review time: 19.64 days
About the journal: Diagnostics (ISSN 2075-4418) is an international scholarly open access journal on medical diagnostics. It publishes original research articles, reviews, communications, and short notes on the research and development of medical diagnostics. There is no restriction on the length of the papers. Our aim is to encourage scientists to publish their experimental and theoretical research in as much detail as possible. Full experimental and/or methodological details must be provided for research articles.
Latest articles in this journal:
Evaluation of the Impact of Different Skeletal Orthodontic Anomalies on Condylar Asymmetry Using Cone-Beam Computed Tomography.
Association Between Serum Testosterone Levels and Coronary Artery Stenosis: A Cross-Sectional Study in Central European Population.
A Novel Dual-Modality Dual-View Hybrid Deep Learning-Machine Learning Framework for the Prediction of Carotid Plaque Vulnerability via Late Fusion.
AI-Assisted OCT Imaging for Core Needle Biopsy Guidance: The 1st in Humans Study.
Assessment of Optimal Stent Implantation with the Use of Optical Coherence Tomography in Patients with Coronary Artery Disease.