{"title":"医疗咨询中大语言模型的比较分析:聚焦幽门螺旋杆菌感染","authors":"Qing-Zhou Kong, Kun-Ping Ju, Meng Wan, Jing Liu, Xiao-Qi Wu, Yue-Yue Li, Xiu-Li Zuo, Yan-Qing Li","doi":"10.1111/hel.13055","DOIUrl":null,"url":null,"abstract":"<div>\n \n \n <section>\n \n <h3> Background</h3>\n \n <p>Large language models (LLMs) are promising medical counseling tools, but the reliability of responses remains unclear. We aimed to assess the feasibility of three popular LLMs as counseling tools for <i>Helicobacter pylori</i> infection in different counseling languages.</p>\n </section>\n \n <section>\n \n <h3> Materials and Methods</h3>\n \n <p>This study was conducted between November 20 and December 1, 2023. Three large language models (ChatGPT 4.0 [LLM1], ChatGPT 3.5 [LLM2], and ERNIE Bot 4.0 [LLM3]) were input 15 <i>H. pylori</i> related questions each, once in English and once in Chinese. Each chat was conducted using the “New Chat” function to avoid bias from correlation interference. Responses were recorded and blindly assigned to three reviewers for scoring on three established Likert scales: accuracy (ranged 1–6 point), completeness (ranged 1–3 point), and comprehensibility (ranged 1–3 point). The acceptable thresholds for the scales were set at a minimum of 4, 2, and 2, respectively. Final various source and interlanguage comparisons were made.</p>\n </section>\n \n <section>\n \n <h3> Results</h3>\n \n <p>The overall mean (SD) accuracy score was 4.80 (1.02), while 1.82 (0.78) for completeness score and 2.90 (0.36) for comprehensibility score. The acceptable proportions for the accuracy, completeness, and comprehensibility of the responses were 90%, 45.6%, and 100%, respectively. The acceptable proportion of overall completeness score for English responses was better than for Chinese responses (<i>p</i> = 0.034). For accuracy, the English responses of LLM3 were better than the Chinese responses (<i>p</i> = 0.0055). As for completeness, the English responses of LLM1 was better than the Chinese responses (<i>p</i> = 0.0257). For comprehensibility, the English responses of LLM1 was better than the Chinese responses (<i>p</i> = 0.0496). No differences were found between the various LLMs.</p>\n </section>\n \n <section>\n \n <h3> Conclusions</h3>\n \n <p>The LLMs responded satisfactorily to questions related to <i>H. pylori</i> infection. But further improving completeness and reliability, along with considering language nuances, is crucial for optimizing overall performance.</p>\n </section>\n </div>","PeriodicalId":13223,"journal":{"name":"Helicobacter","volume":"29 1","pages":""},"PeriodicalIF":4.3000,"publicationDate":"2024-02-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Comparative analysis of large language models in medical counseling: A focus on Helicobacter pylori infection\",\"authors\":\"Qing-Zhou Kong, Kun-Ping Ju, Meng Wan, Jing Liu, Xiao-Qi Wu, Yue-Yue Li, Xiu-Li Zuo, Yan-Qing Li\",\"doi\":\"10.1111/hel.13055\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<div>\\n \\n \\n <section>\\n \\n <h3> Background</h3>\\n \\n <p>Large language models (LLMs) are promising medical counseling tools, but the reliability of responses remains unclear. 
We aimed to assess the feasibility of three popular LLMs as counseling tools for <i>Helicobacter pylori</i> infection in different counseling languages.</p>\\n </section>\\n \\n <section>\\n \\n <h3> Materials and Methods</h3>\\n \\n <p>This study was conducted between November 20 and December 1, 2023. Three large language models (ChatGPT 4.0 [LLM1], ChatGPT 3.5 [LLM2], and ERNIE Bot 4.0 [LLM3]) were input 15 <i>H. pylori</i> related questions each, once in English and once in Chinese. Each chat was conducted using the “New Chat” function to avoid bias from correlation interference. Responses were recorded and blindly assigned to three reviewers for scoring on three established Likert scales: accuracy (ranged 1–6 point), completeness (ranged 1–3 point), and comprehensibility (ranged 1–3 point). The acceptable thresholds for the scales were set at a minimum of 4, 2, and 2, respectively. Final various source and interlanguage comparisons were made.</p>\\n </section>\\n \\n <section>\\n \\n <h3> Results</h3>\\n \\n <p>The overall mean (SD) accuracy score was 4.80 (1.02), while 1.82 (0.78) for completeness score and 2.90 (0.36) for comprehensibility score. The acceptable proportions for the accuracy, completeness, and comprehensibility of the responses were 90%, 45.6%, and 100%, respectively. The acceptable proportion of overall completeness score for English responses was better than for Chinese responses (<i>p</i> = 0.034). For accuracy, the English responses of LLM3 were better than the Chinese responses (<i>p</i> = 0.0055). As for completeness, the English responses of LLM1 was better than the Chinese responses (<i>p</i> = 0.0257). For comprehensibility, the English responses of LLM1 was better than the Chinese responses (<i>p</i> = 0.0496). No differences were found between the various LLMs.</p>\\n </section>\\n \\n <section>\\n \\n <h3> Conclusions</h3>\\n \\n <p>The LLMs responded satisfactorily to questions related to <i>H. pylori</i> infection. But further improving completeness and reliability, along with considering language nuances, is crucial for optimizing overall performance.</p>\\n </section>\\n </div>\",\"PeriodicalId\":13223,\"journal\":{\"name\":\"Helicobacter\",\"volume\":\"29 1\",\"pages\":\"\"},\"PeriodicalIF\":4.3000,\"publicationDate\":\"2024-02-06\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Helicobacter\",\"FirstCategoryId\":\"3\",\"ListUrlMain\":\"https://onlinelibrary.wiley.com/doi/10.1111/hel.13055\",\"RegionNum\":2,\"RegionCategory\":\"医学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q1\",\"JCRName\":\"GASTROENTEROLOGY & HEPATOLOGY\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Helicobacter","FirstCategoryId":"3","ListUrlMain":"https://onlinelibrary.wiley.com/doi/10.1111/hel.13055","RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"GASTROENTEROLOGY & HEPATOLOGY","Score":null,"Total":0}
Comparative analysis of large language models in medical counseling: A focus on Helicobacter pylori infection
Background
Large language models (LLMs) are promising medical counseling tools, but the reliability of their responses remains unclear. We aimed to assess the feasibility of three popular LLMs as counseling tools for Helicobacter pylori infection across different counseling languages.
Materials and Methods
This study was conducted between November 20 and December 1, 2023. Three large language models (ChatGPT 4.0 [LLM1], ChatGPT 3.5 [LLM2], and ERNIE Bot 4.0 [LLM3]) were each asked 15 H. pylori-related questions, once in English and once in Chinese. Every question was posed in a fresh session using the “New Chat” function to avoid carry-over bias from earlier prompts. Responses were recorded and blindly assigned to three reviewers for scoring on three established Likert scales: accuracy (1–6 points), completeness (1–3 points), and comprehensibility (1–3 points). The acceptability thresholds for the three scales were set at a minimum of 4, 2, and 2 points, respectively. Finally, comparisons were made across models and between languages.
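To make the scoring pipeline concrete, the sketch below shows how the summary statistics reported in the abstract (mean, SD, and proportion of responses meeting the acceptability threshold) could be computed from reviewers' Likert ratings. The ratings and dictionary layout are hypothetical; the abstract does not describe the authors' actual analysis code, only the scales and thresholds.

```python
from statistics import mean, stdev

# Hypothetical reviewer ratings, one list of Likert scores per dimension.
# In the study, each of the 3 LLMs answered 15 questions in 2 languages,
# and each response was rated by 3 blinded reviewers.
ratings = {
    "accuracy":          [5, 6, 4, 3, 5, 6, 5, 4],   # 1-6 scale
    "completeness":      [2, 1, 3, 2, 1, 2, 2, 1],   # 1-3 scale
    "comprehensibility": [3, 3, 3, 2, 3, 3, 3, 3],   # 1-3 scale
}

# Minimum score for a response to count as "acceptable", per the abstract:
# 4 for accuracy, 2 for completeness, 2 for comprehensibility.
thresholds = {"accuracy": 4, "completeness": 2, "comprehensibility": 2}

for dimension, scores in ratings.items():
    acceptable = sum(s >= thresholds[dimension] for s in scores) / len(scores)
    print(f"{dimension}: mean={mean(scores):.2f} (SD={stdev(scores):.2f}), "
          f"acceptable={acceptable:.0%}")
```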
Results
The overall mean (SD) scores were 4.80 (1.02) for accuracy, 1.82 (0.78) for completeness, and 2.90 (0.36) for comprehensibility. The proportions of responses with acceptable accuracy, completeness, and comprehensibility were 90%, 45.6%, and 100%, respectively. Overall, the proportion of acceptable completeness scores was higher for English responses than for Chinese responses (p = 0.034). For accuracy, the English responses of LLM3 were better than its Chinese responses (p = 0.0055). For completeness, the English responses of LLM1 were better than its Chinese responses (p = 0.0257). For comprehensibility, the English responses of LLM1 were better than its Chinese responses (p = 0.0496). No differences were found among the three LLMs.
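The abstract does not state which statistical tests produced these p-values. The sketch below illustrates two plausible choices for the interlanguage comparisons, Fisher's exact test for the acceptable-proportion comparison and a Mann-Whitney U test for the raw score distributions, using entirely made-up counts and scores.

```python
from scipy.stats import fisher_exact, mannwhitneyu

# Hypothetical 2x2 table: acceptable vs. not-acceptable completeness scores
# for English and Chinese responses (counts are illustrative only).
#         acceptable  not acceptable
table = [[26, 19],    # English
         [15, 30]]    # Chinese
odds_ratio, p_prop = fisher_exact(table)
print(f"Acceptable-proportion comparison: p = {p_prop:.4f}")

# Comparing the raw Likert score distributions between languages could
# instead use a non-parametric test such as Mann-Whitney U.
english_scores = [5, 6, 4, 5, 6, 5, 4, 6]
chinese_scores = [4, 3, 5, 4, 3, 4, 5, 3]
stat, p_scores = mannwhitneyu(english_scores, chinese_scores,
                              alternative="two-sided")
print(f"Score-distribution comparison: p = {p_scores:.4f}")
```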
Conclusions
The LLMs responded satisfactorily to questions related to H. pylori infection, but further improvements in completeness and reliability, along with attention to language nuances, are crucial for optimizing overall performance.
Journal introduction:
Helicobacter is edited by Professor David Y. Graham. The editorial and peer review process is independent; whenever there is a conflict of interest, the editor and editorial board will declare their interests and affiliations. Helicobacter recognises the critical role that has been established for Helicobacter pylori in peptic ulcer, gastric adenocarcinoma, and primary gastric lymphoma. As new Helicobacter species are now regularly being discovered, the journal covers the entire range of Helicobacter research, increasing communication among the fields of gastroenterology, microbiology, vaccine development, and laboratory animal science.