{"title":"研究先进的大型语言模型在生成患者指南和患者教育材料方面的能力。","authors":"Kannan Sridharan, Gowri Sivaramakrishnan","doi":"10.1136/ejhpharm-2024-004245","DOIUrl":null,"url":null,"abstract":"<p><strong>Objectives: </strong>Large language models (LLMs) with advanced language generation capabilities have the potential to enhance patient interactions. This study evaluates the effectiveness of ChatGPT 4.0 and Gemini 1.0 Pro in providing patient instructions and creating patient educational material (PEM).</p><p><strong>Methods: </strong>A cross-sectional study employed ChatGPT 4.0 and Gemini 1.0 Pro across six medical scenarios using simple and detailed prompts. The Patient Education Materials Assessment Tool for Print materials (PEMAT-P) evaluated the understandability, actionability, and readability of the outputs.</p><p><strong>Results: </strong>LLMs provided consistent responses, especially regarding drug information, therapeutic goals, administration, common side effects, and interactions. However, they lacked guidance on expiration dates and proper medication disposal. Detailed prompts yielded comprehensible outputs for the average adult. ChatGPT 4.0 had mean understandability and actionability scores of 80% and 60%, respectively, compared with 67% and 60% for Gemini 1.0 Pro. ChatGPT 4.0 produced longer outputs, achieving 85% readability with detailed prompts, while Gemini 1.0 Pro maintained consistent readability. Simple prompts resulted in ChatGPT 4.0 outputs at a 10th-grade reading level, while Gemini 1.0 Pro outputs were at a 7th-grade level. Both LLMs produced outputs at a 6th-grade level with detailed prompts.</p><p><strong>Conclusion: </strong>LLMs show promise in generating patient instructions and PEM. 
However, healthcare professional oversight and patient education on LLM use are essential for effective implementation.</p>","PeriodicalId":12050,"journal":{"name":"European journal of hospital pharmacy : science and practice","volume":" ","pages":""},"PeriodicalIF":1.6000,"publicationDate":"2024-10-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Investigating the capabilities of advanced large language models in generating patient instructions and patient educational material.\",\"authors\":\"Kannan Sridharan, Gowri Sivaramakrishnan\",\"doi\":\"10.1136/ejhpharm-2024-004245\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<p><strong>Objectives: </strong>Large language models (LLMs) with advanced language generation capabilities have the potential to enhance patient interactions. This study evaluates the effectiveness of ChatGPT 4.0 and Gemini 1.0 Pro in providing patient instructions and creating patient educational material (PEM).</p><p><strong>Methods: </strong>A cross-sectional study employed ChatGPT 4.0 and Gemini 1.0 Pro across six medical scenarios using simple and detailed prompts. The Patient Education Materials Assessment Tool for Print materials (PEMAT-P) evaluated the understandability, actionability, and readability of the outputs.</p><p><strong>Results: </strong>LLMs provided consistent responses, especially regarding drug information, therapeutic goals, administration, common side effects, and interactions. However, they lacked guidance on expiration dates and proper medication disposal. Detailed prompts yielded comprehensible outputs for the average adult. ChatGPT 4.0 had mean understandability and actionability scores of 80% and 60%, respectively, compared with 67% and 60% for Gemini 1.0 Pro. ChatGPT 4.0 produced longer outputs, achieving 85% readability with detailed prompts, while Gemini 1.0 Pro maintained consistent readability. 
Simple prompts resulted in ChatGPT 4.0 outputs at a 10th-grade reading level, while Gemini 1.0 Pro outputs were at a 7th-grade level. Both LLMs produced outputs at a 6th-grade level with detailed prompts.</p><p><strong>Conclusion: </strong>LLMs show promise in generating patient instructions and PEM. However, healthcare professional oversight and patient education on LLM use are essential for effective implementation.</p>\",\"PeriodicalId\":12050,\"journal\":{\"name\":\"European journal of hospital pharmacy : science and practice\",\"volume\":\" \",\"pages\":\"\"},\"PeriodicalIF\":1.6000,\"publicationDate\":\"2024-10-11\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"European journal of hospital pharmacy : science and practice\",\"FirstCategoryId\":\"3\",\"ListUrlMain\":\"https://doi.org/10.1136/ejhpharm-2024-004245\",\"RegionNum\":4,\"RegionCategory\":\"医学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q3\",\"JCRName\":\"PHARMACOLOGY & PHARMACY\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"European journal of hospital pharmacy : science and practice","FirstCategoryId":"3","ListUrlMain":"https://doi.org/10.1136/ejhpharm-2024-004245","RegionNum":4,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q3","JCRName":"PHARMACOLOGY & PHARMACY","Score":null,"Total":0}
Investigating the capabilities of advanced large language models in generating patient instructions and patient educational material.
Objectives: Large language models (LLMs) with advanced language generation capabilities have the potential to enhance patient interactions. This study evaluates the effectiveness of ChatGPT 4.0 and Gemini 1.0 Pro in providing patient instructions and creating patient educational material (PEM).
Methods: A cross-sectional study employed ChatGPT 4.0 and Gemini 1.0 Pro across six medical scenarios using simple and detailed prompts. The Patient Education Materials Assessment Tool for Print materials (PEMAT-P) evaluated the understandability, actionability, and readability of the outputs.
Results: LLMs provided consistent responses, especially regarding drug information, therapeutic goals, administration, common side effects, and interactions. However, they lacked guidance on expiration dates and proper medication disposal. Detailed prompts yielded comprehensible outputs for the average adult. ChatGPT 4.0 had mean understandability and actionability scores of 80% and 60%, respectively, compared with 67% and 60% for Gemini 1.0 Pro. ChatGPT 4.0 produced longer outputs, achieving 85% readability with detailed prompts, while Gemini 1.0 Pro maintained consistent readability. Simple prompts resulted in ChatGPT 4.0 outputs at a 10th-grade reading level, while Gemini 1.0 Pro outputs were at a 7th-grade level. Both LLMs produced outputs at a 6th-grade level with detailed prompts.
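The abstract reports reading levels by school grade (e.g., 6th- to 10th-grade) but does not state which readability formula was applied; the Flesch-Kincaid grade level is a common choice for this kind of assessment. The sketch below is a minimal, hypothetical illustration of how such a grade score can be computed from raw text, using a crude vowel-group syllable heuristic; it is not the authors' actual instrument.

```python
import re

def count_syllables(word: str) -> int:
    # Crude heuristic: one syllable per run of consecutive vowels,
    # with a common adjustment for a silent trailing "e".
    word = word.lower()
    count = len(re.findall(r"[aeiouy]+", word))
    if word.endswith("e") and count > 1 and not word.endswith(("le", "ee")):
        count -= 1
    return max(count, 1)

def flesch_kincaid_grade(text: str) -> float:
    # Flesch-Kincaid grade level:
    # 0.39 * (words/sentences) + 11.8 * (syllables/words) - 15.59
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(count_syllables(w) for w in words)
    return (0.39 * (len(words) / len(sentences))
            + 11.8 * (syllables / len(words))
            - 15.59)
```

Scores round to a US school grade, so a patient leaflet scoring near 6 matches the 6th-grade level both models reached with detailed prompts; real evaluations typically use a validated implementation rather than a heuristic syllable counter like this one.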
Conclusion: LLMs show promise in generating patient instructions and PEM. However, healthcare professional oversight and patient education on LLM use are essential for effective implementation.
Journal introduction:
European Journal of Hospital Pharmacy (EJHP) offers a high-quality, peer-reviewed platform for the publication of practical and innovative research that aims to strengthen the profile and professional status of hospital pharmacists. EJHP is committed to being the leading journal on all aspects of hospital pharmacy, thereby advancing the science, practice and profession of hospital pharmacy. The journal aims to become a major source of education and inspiration to improve practice and the standard of patient care in hospitals and related institutions worldwide.
EJHP is the only official journal of the European Association of Hospital Pharmacists.