Investigating the role of large language models on questions about refractive surgery

IF 3.7 | CAS Region 2 (Medicine) | JCR Q2 (Computer Science, Information Systems) | International Journal of Medical Informatics | Pub Date: 2025-01-06 | DOI: 10.1016/j.ijmedinf.2025.105787
Suleyman Demir
Citations: 0

Abstract
Background

Large language models (LLMs) are becoming increasingly popular and are playing an important role in providing accurate clinical information to both patients and physicians. This study aimed to investigate the effectiveness of ChatGPT-4.0, Google Gemini, and Microsoft Copilot LLMs for responding to patient questions regarding refractive surgery.

Methods

The LLMs’ responses to 25 questions about refractive surgery, which are frequently asked by patients, were evaluated by two ophthalmologists using a 5-point Likert scale, with scores ranging from 1 to 5. Furthermore, the DISCERN scale was used to assess the reliability of the language models’ responses, whereas the Flesch Reading Ease and Flesch–Kincaid Grade Level indices were used to evaluate readability.
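The two readability indices named in the Methods are standard formulas over counts of words, sentences, and syllables: Flesch Reading Ease = 206.835 − 1.015·(words/sentence) − 84.6·(syllables/word), and Flesch–Kincaid Grade Level = 0.39·(words/sentence) + 11.8·(syllables/word) − 15.59. The sketch below illustrates how such scores are computed; it uses a naive vowel-group syllable heuristic (published readability tools use pronunciation dictionaries) and is an illustration, not the study's actual tooling.

```python
import re

def count_syllables(word: str) -> int:
    """Approximate syllables as vowel groups; naive heuristic, not dictionary-based."""
    word = word.lower()
    n = len(re.findall(r"[aeiouy]+", word))
    if word.endswith("e") and n > 1:  # drop a typical silent final 'e'
        n -= 1
    return max(n, 1)

def readability(text: str) -> tuple[float, float]:
    """Return (Flesch Reading Ease, Flesch-Kincaid Grade Level) for an English text."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(count_syllables(w) for w in words)
    wps = len(words) / len(sentences)   # average words per sentence
    spw = syllables / len(words)        # average syllables per word
    fre = 206.835 - 1.015 * wps - 84.6 * spw
    fkgl = 0.39 * wps + 11.8 * spw - 15.59
    return fre, fkgl
```

Higher Reading Ease means easier text, while a higher Grade Level means harder text, which is why a model can score "lowest" on readability while scoring highest on reliability.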

Results

Significant differences were found among all three LLMs in the Likert scores (p = 0.022). Pairwise comparisons revealed that ChatGPT-4.0's Likert score was significantly higher than that of Microsoft Copilot, while no significant difference was found when compared to Google Gemini (p = 0.005 and p = 0.087, respectively). In terms of reliability, ChatGPT-4.0 stood out, receiving the highest DISCERN scores among the three LLMs. However, in terms of readability, ChatGPT-4.0 received the lowest score.

Conclusions

ChatGPT-4.0's responses to inquiries regarding refractive surgery were more intricate for patients compared to other language models; however, the information provided was more dependable and accurate.
Source journal: International Journal of Medical Informatics (Medicine · Computer Science: Information Systems)
CiteScore: 8.90
Self-citation rate: 4.10%
Annual publication volume: 217
Review time: 42 days

Journal description: International Journal of Medical Informatics provides an international medium for dissemination of original results and interpretative reviews concerning the field of medical informatics. The journal emphasizes the evaluation of systems in healthcare settings. The scope of the journal covers: information systems, including national or international registration systems, hospital information systems, departmental and/or physician's office systems, document handling systems, electronic medical record systems, standardization, systems integration, etc.; computer-aided medical decision support systems using heuristic, algorithmic and/or statistical methods as exemplified in decision theory, protocol development, artificial intelligence, etc.; educational computer-based programs pertaining to medical informatics or medicine in general; and organizational, economic, social, clinical-impact, ethical, and cost-benefit aspects of IT applications in health care.