Objective: To evaluate the performance of large language models (LLMs), specifically Microsoft Copilot, GPT-4 (GPT-4o and GPT-4o mini), and Google Gemini (Gemini and Gemini Advanced), in answering ophthalmological questions and assessing the impact of prompting techniques on their accuracy.
Design: Prospective qualitative study.
Participants: Microsoft Copilot, GPT-4 (GPT-4o and GPT-4o mini), and Google Gemini (Gemini and Gemini Advanced).
Methods: A total of 300 ophthalmological questions from StatPearls were tested, covering a range of subspecialties and image-based tasks. Each question was evaluated using 2 prompting techniques: zero-shot forced prompting (Prompt 1) and combined role-based and zero-shot plan-and-solve+ prompting (Prompt 2).
Results: With zero-shot forced prompting, GPT-4o demonstrated significantly superior overall performance, correctly answering 72.3% of questions and outperforming all other models: Copilot (53.7%), GPT-4o mini (62.0%), Gemini (54.3%), and Gemini Advanced (62.0%) (p < 0.0001). Both Copilot and GPT-4o showed notable improvements with Prompt 2 over Prompt 1, elevating Copilot's accuracy from the lowest (53.7%) to the second highest (72.3%) among the evaluated LLMs.
Conclusions: While newer iterations of LLMs, such as GPT-4o and Gemini Advanced, outperformed their less advanced counterparts (GPT-4o mini and Gemini), this study emphasizes the need for caution in clinical applications of these models. The choice of prompting technique significantly influences performance, highlighting the necessity for further research to refine LLMs' capabilities, particularly in visual data interpretation, to ensure their safe integration into medical practice.