A Future of Self-Directed Patient Internet Research: Large Language Model-Based Tools Versus Standard Search Engines

Arya Rao, Andrew Mu, Elizabeth Enichen, Dhruva Gupta, Nathan Hall, Erica Koranteng, William Marks, Michael J Senter-Zapata, David C Whitehead, Benjamin A White, Sanjay Saini, Adam B Landman, Marc D Succi

Annals of Biomedical Engineering, published online 2025-03-03. DOI: 10.1007/s10439-025-03701-6
Abstract
Purpose: As generalist large language models (LLMs) become more commonplace, patients will increasingly turn to these tools instead of traditional search engines. Here, we evaluate publicly available LLM-based chatbots as tools for patient education through physician review of responses provided by Google, Bard, GPT-3.5, and GPT-4 to commonly searched queries about prevalent chronic health conditions in the United States.
Methods: Five distinct commonly Google-searched queries were selected for each of (i) hypertension, (ii) hyperlipidemia, (iii) diabetes, (iv) anxiety, and (v) mood disorders, and each query was submitted to each model of interest. Board-certified physicians rated each response for accuracy, comprehensiveness, and overall quality on a five-point Likert scale. Flesch-Kincaid Grade Levels were calculated to assess readability.
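For readers unfamiliar with the readability metric, the following minimal Python sketch re-implements the standard Flesch-Kincaid Grade Level formula, FKGL = 0.39 * (words/sentences) + 11.8 * (syllables/words) - 15.59. The paper does not specify its tooling; the naive vowel-group syllable counter below is an assumption for illustration (dictionary-based counters or packages such as textstat are more precise).

```python
import re

def count_syllables(word: str) -> int:
    # Naive heuristic: count contiguous vowel groups, at least one per word.
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def flesch_kincaid_grade(text: str) -> float:
    sentences = re.findall(r"[.!?]+", text) or ["."]  # guard against zero sentences
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(count_syllables(w) for w in words)
    # Standard FKGL formula: 0.39*(words/sentences) + 11.8*(syllables/words) - 15.59
    return (0.39 * len(words) / len(sentences)
            + 11.8 * syllables / len(words)
            - 15.59)

# Example usage on a short patient-education style answer.
sample = ("High blood pressure often has no symptoms. "
          "A doctor can check it with a simple cuff test.")
print(round(flesch_kincaid_grade(sample), 2))
```

Lower FKGL scores correspond to text that is readable at a lower school grade level, which is why the lower Bard and Google scores reported below indicate greater readability.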
Results: GPT-3.5 (comprehensiveness 4.40 ± 0.48; quality 4.29 ± 0.43) and GPT-4 (4.35 ± 0.30; 4.24 ± 0.28) received higher comprehensiveness and quality ratings than Bard (3.79 ± 0.36; 3.87 ± 0.32) and Google (1.87 ± 0.42; 2.11 ± 0.47), all p < 0.05. However, Bard (9.45 ± 1.35) and Google (9.92 ± 5.31) responses had a lower average Flesch-Kincaid Grade Level than GPT-3.5 (14.69 ± 1.57) and GPT-4 (12.88 ± 2.02), indicating greater readability.
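The abstract does not name the statistical test behind "all p < 0.05." As a purely illustrative sketch of how such Likert ratings might be compared, the code below applies a Mann-Whitney U test, a common nonparametric choice for ordinal ratings, to hypothetical arrays; the data values and the test choice are assumptions, not the authors' reported method.

```python
import numpy as np
from scipy.stats import mannwhitneyu

# Hypothetical physician quality ratings on a 1-5 Likert scale; NOT the study's data.
gpt4_quality = np.array([4, 5, 4, 4, 5, 4, 4, 5])
google_quality = np.array([2, 2, 3, 1, 2, 2, 3, 2])

# Summarize as mean ± sample standard deviation, matching the abstract's format.
print(f"GPT-4:  {gpt4_quality.mean():.2f} ± {gpt4_quality.std(ddof=1):.2f}")
print(f"Google: {google_quality.mean():.2f} ± {google_quality.std(ddof=1):.2f}")

# Two-sided Mann-Whitney U test for a difference between the two rating groups.
stat, p = mannwhitneyu(gpt4_quality, google_quality, alternative="two-sided")
print(f"Mann-Whitney U = {stat:.1f}, p = {p:.4f}")
```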
Conclusion: This study suggests that publicly available LLM-based tools may provide patients with more accurate responses to queries on chronic health conditions than answers provided by Google search. These results support the use of these tools in place of traditional search engines for health-related queries.
About the Journal
Annals of Biomedical Engineering is an official journal of the Biomedical Engineering Society, publishing original articles in the major fields of bioengineering and biomedical engineering. The Annals is an interdisciplinary, international journal that aims to highlight integrated approaches to solving biological and biomedical problems.