
Latest Publications From JMIR Medical Education

Topics and Trends of Health Informatics Education Research: Scientometric Analysis.
IF 3.2 Q1 EDUCATION, SCIENTIFIC DISCIPLINES Pub Date: 2024-12-11 DOI: 10.2196/58165
Qing Han

Background: Academic and educational institutions are making significant contributions toward training health informatics professionals. As research in health informatics education (HIE) continues to grow, it is useful to have a clearer understanding of this research field.

Objective: This study aims to comprehensively explore the research topics and trends of HIE from 2014 to 2023. Specifically, it aims to explore (1) the trends of annual articles, (2) the prolific countries/regions, institutions, and publication sources, (3) the scientific collaborations of countries/regions and institutions, and (4) the major research themes and their developmental tendencies.

Methods: Using publications indexed in the Web of Science Core Collection, a scientometric analysis of 575 articles related to the field of HIE was conducted. The structural topic model was used to identify topics discussed in the literature and to reveal the topic structure and evolutionary trends of HIE research.

Results: Research interest in HIE clearly increased from 2014 to 2023 and is continually expanding. The United States was found to be the most prolific country in this field, and Harvard University was the leading institution with the highest publication productivity. Journal of Medical Internet Research, Journal of the American Medical Informatics Association, and Applied Clinical Informatics were the 3 journals that published the most articles in this field. Countries/regions and institutions with higher levels of international collaboration were more impactful. Research on HIE could be modeled into 7 topics related to the following areas: clinical (130/575, 22.6%), mobile application (123/575, 21.4%), consumer (99/575, 17.2%), teaching (61/575, 10.6%), public health (56/575, 9.7%), discipline (55/575, 9.6%), and nursing (51/575, 8.9%). The results clearly indicate the unique foci of each year, depicting the development of health informatics research.
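As a quick sanity check, the topic breakdown reported in these Results can be reproduced in a few lines of Python (counts taken from the abstract; this is a back-of-the-envelope check, not part of the study's code):

```python
# Topic counts reported in the Results (articles per topic, of 575 total).
topic_counts = {
    "clinical": 130,
    "mobile application": 123,
    "consumer": 99,
    "teaching": 61,
    "public health": 56,
    "discipline": 55,
    "nursing": 51,
}

# Sum of counts and percentage share of each topic, rounded to 1 decimal.
total = sum(topic_counts.values())
shares = {t: round(100 * n / total, 1) for t, n in topic_counts.items()}
```

The counts sum to exactly 575, and the shares round to the percentages quoted above (22.6%, 21.4%, and so on), confirming that each article was assigned to exactly one topic.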

Conclusions: This is believed to be the first scientometric analysis exploring the research topics and trends in HIE. This study provides useful insights and implications, and the findings could be used as a guide for HIE contributors.

Citations: 0
ChatGPT May Improve Access to Language-Concordant Care for Patients With Non-English Language Preferences.
IF 3.2 Q1 EDUCATION, SCIENTIFIC DISCIPLINES Pub Date: 2024-12-10 DOI: 10.2196/51435
Fiatsogbe Dzuali, Kira Seiger, Roberto Novoa, Maria Aleshin, Joyce Teng, Jenna Lester, Roxana Daneshjou

Unlabelled: This study evaluated the accuracy of ChatGPT in translating English patient education materials into Spanish, Mandarin, and Russian. While ChatGPT shows promise for translating Spanish and Russian medical information, Mandarin translations require further refinement, highlighting the need for careful review of AI-generated translations before clinical use.

Citations: 0
Evaluation of a Computer-Based Morphological Analysis Method for Free-Text Responses in the General Medicine In-Training Examination: Algorithm Validation Study.
IF 3.2 Q1 EDUCATION, SCIENTIFIC DISCIPLINES Pub Date: 2024-12-05 DOI: 10.2196/52068
Daiki Yokokawa, Kiyoshi Shikino, Yuji Nishizaki, Sho Fukui, Yasuharu Tokuda

Background: The General Medicine In-Training Examination (GM-ITE) tests clinical knowledge in Japan's 2-year postgraduate residency program. In the academic year 2021, as a domain of medical safety, the GM-ITE included questions on diagnosis from the medical history and physical findings obtained through video viewing, as well as on case presentation skills. Examinees watched a video or audio recording of a patient examination and provided free-text responses. However, the human cost of scoring free-text answers may limit the implementation of the GM-ITE. A simple morphological analysis and word-matching model could therefore be used to score free-text responses.

Objective: This study aimed to compare human versus computer scoring of free-text responses and qualitatively evaluate the discrepancies between human- and machine-generated scores to assess the efficacy of machine scoring.

Methods: After obtaining consent for participation in the study, the authors used text data from residents who voluntarily answered the GM-ITE patient reproduction video-based questions involving simulated patients. The GM-ITE used video-based questions to simulate a patient's consultation in the emergency room with a diagnosis of pulmonary embolism following a fracture. Residents provided statements for the case presentation. We obtained human-generated scores by collating the results of 2 independent scorers and machine-generated scores by converting the free-text responses into a word sequence through segmentation and morphological analysis and matching them with a prepared list of correct answers in 2022.
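The word-matching step described above can be sketched in a few lines of Python. This is a minimal illustration, not the authors' implementation: whitespace tokenization stands in for Japanese morphological analysis (which typically uses a dedicated segmenter such as MeCab), and the keyword list is hypothetical:

```python
# Minimal sketch of keyword-matching scoring for free-text answers.
# Whitespace tokenization stands in for morphological analysis, and the
# answer-keyword list below is invented for illustration only.

def tokenize(text):
    """Stand-in segmenter: lowercase, strip simple punctuation, split."""
    return text.lower().replace(",", " ").replace(".", " ").split()

def score_response(response, answer_keywords):
    """Return the matched keywords and a score out of len(answer_keywords)."""
    tokens = set(tokenize(response))
    matched = [kw for kw in answer_keywords if kw in tokens]
    return matched, len(matched)

# Hypothetical correct-answer keywords for the pulmonary embolism case.
keywords = ["pulmonary", "embolism", "fracture", "dyspnea"]
matched, score = score_response(
    "Sudden dyspnea after a fracture, suspicious for pulmonary embolism.",
    keywords,
)
```

In this toy example all 4 keywords are found, so the response scores 4/4; the discrepancies reported in the Results suggest that, in practice, such a matcher needs an actively maintained list of correct words and dictionary entries.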

Results: Of the 104 responses collected (63 for postgraduate year 1 and 41 for postgraduate year 2), 39 cases remained for final analysis after excluding invalid responses. The authors found discrepancies between human and machine scoring in 14 questions (7.2%); some were due to shortcomings in machine scoring that could be resolved by maintaining a list of correct words and dictionaries, whereas others were due to human error.

Conclusions: Machine scoring is comparable to human scoring. It requires a simple program and calibration but can potentially reduce the cost of scoring free-text responses.

Citations: 0
Performance of GPT-3.5 and GPT-4 on the Korean Pharmacist Licensing Examination: Comparison Study.
IF 3.2 Q1 EDUCATION, SCIENTIFIC DISCIPLINES Pub Date: 2024-12-04 DOI: 10.2196/57451
Hye Kyung Jin, EunYoung Kim

Background: ChatGPT, a recently developed artificial intelligence chatbot and a notable large language model, has demonstrated improved performance on medical field examinations. However, there is currently little research on its efficacy in languages other than English or in pharmacy-related examinations.

Objective: This study aimed to evaluate the performance of GPT models on the Korean Pharmacist Licensing Examination (KPLE).

Methods: We evaluated the percentage of correct answers provided by 2 different versions of ChatGPT (GPT-3.5 and GPT-4) for all multiple-choice single-answer KPLE questions, excluding image-based questions. In total, 320, 317, and 323 questions from the 2021, 2022, and 2023 KPLEs, respectively, were included in the final analysis, which consisted of 4 units: Biopharmacy, Industrial Pharmacy, Clinical and Practical Pharmacy, and Medical Health Legislation.

Results: The 3-year average percentage of correct answers was 86.5% (830/960) for GPT-4 and 60.7% (583/960) for GPT-3.5. GPT model accuracy was highest in Biopharmacy (GPT-3.5 77/96, 80.2% in 2022; GPT-4 87/90, 96.7% in 2021) and lowest in Medical Health Legislation (GPT-3.5 8/20, 40% in 2022; GPT-4 12/20, 60% in 2022). Additionally, when comparing the performance of artificial intelligence with that of human participants, pharmacy students outperformed GPT-3.5 but not GPT-4.
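The 3-year averages quoted above can be verified directly from the reported correct/total counts. A short Python check (the 60% cutoff is a hypothetical illustration; the abstract does not state the actual KPLE pass rule):

```python
# Reported 3-year correct/total answer counts from the abstract.
counts = {"GPT-4": (830, 960), "GPT-3.5": (583, 960)}

# Average percentage correct per model, rounded to 1 decimal.
averages = {m: round(100 * c / t, 1) for m, (c, t) in counts.items()}

# Hypothetical 60% overall cutoff, used only to illustrate the
# "close to or exceeded the passing threshold" claim.
passed = {m: avg >= 60.0 for m, avg in averages.items()}
```

This reproduces the 86.5% (GPT-4) and 60.7% (GPT-3.5) figures; under the illustrative 60% cutoff, GPT-4 passes comfortably while GPT-3.5 sits just above the line, consistent with the Conclusions.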

Conclusions: In the last 3 years, GPT models have performed very close to or exceeded the passing threshold for the KPLE. This study demonstrates the potential of large language models in the pharmacy domain; however, extensive research is needed to evaluate their reliability and ensure their secure application in pharmacy contexts due to several inherent challenges. Addressing these limitations could make GPT models more effective auxiliary tools for pharmacy education.

Citations: 0
Practical Recommendations for Navigating Digital Tools in Hospitals: Qualitative Interview Study.
IF 3.2 Q1 EDUCATION, SCIENTIFIC DISCIPLINES Pub Date: 2024-11-27 DOI: 10.2196/60031
Marie Wosny, Livia Maria Strasser, Simone Kraehenmann, Janna Hastings
Background: The digitalization of health care organizations is an integral part of a clinician's daily life, making it vital for health care professionals (HCPs) to understand and effectively use digital tools in hospital settings. However, clinicians often express a lack of preparedness for their digital work environments. In particular, new clinical end users (medical and nursing students, seasoned professionals transitioning to new health care environments, and experienced practitioners encountering new health care technologies) face critically intense learning periods, often without adequate time to learn digital tools, resulting in difficulties integrating and adopting these tools into clinical practice.

Objective: This study aims to comprehensively collect advice from experienced HCPs in Switzerland to guide new clinical end users on how to initiate their engagement with health ITs within hospital settings.

Methods: We conducted qualitative interviews with 52 HCPs across Switzerland, representing 24 medical specialties from 14 hospitals. The interviews were transcribed verbatim and analyzed through inductive thematic analysis. Codes were developed iteratively, and themes and aggregated dimensions were refined through collaborative discussions.

Results: Ten themes emerged from the interview data, namely (1) digital tool understanding, (2) peer-based learning strategies, (3) experimental learning approaches, (4) knowledge exchange and support, (5) training approaches, (6) proactive innovation, (7) an adaptive technology mindset, (8) critical thinking approaches, (9) dealing with emotions, and (10) empathy and human factors. Consequently, we devised 10 recommendations with specific advice to new clinical end users on how to approach new health care technologies: take time to get to know and understand the tools you are working with; proactively ask experienced colleagues; simply try it out and practice; know where to get help and information; take sufficient training; embrace curiosity and pursue innovation; maintain an open and adaptable mindset; keep thinking critically and use your knowledge base; overcome your fears; and never lose the human and patient focus.

Conclusions: Our study emphasized the importance of comprehensive training and learning approaches for health care technologies, based on the advice and recommendations of experienced HCPs in Swiss hospitals. These recommendations also have implications for medical educators and clinical instructors, providing effective methods to instruct and support new end users so that they can use novel technologies proficiently. Therefore, we advocate for new clinical end users, health care institutions and clinical instructors, academic institutions and medical educators, and regulatory bodies to prioritize effective training and the cultivation of technology readiness, in order to optimize the use of IT in health care.
Citations: 0
Performance Comparison of Junior Residents and ChatGPT in the Objective Structured Clinical Examination (OSCE) for Medical History Taking and Documentation of Medical Records: Development and Usability Study.
IF 3.2 Q1 EDUCATION, SCIENTIFIC DISCIPLINES Pub Date: 2024-11-21 DOI: 10.2196/59902
Ting-Yun Huang, Pei Hsing Hsieh, Yung-Chun Chang

Background: This study explores the cutting-edge abilities of large language models (LLMs) such as ChatGPT in medical history taking and medical record documentation, with a focus on their practical effectiveness in clinical settings, an area vital for the progress of medical artificial intelligence.

Objective: Our aim was to assess the capability of ChatGPT versions 3.5 and 4.0 in performing medical history taking and medical record documentation in simulated clinical environments. The study compared the performance of nonmedical individuals using ChatGPT with that of junior medical residents.

Methods: A simulation involving standardized patients was designed to mimic authentic medical history-taking interactions. Five nonmedical participants used ChatGPT versions 3.5 and 4.0 to conduct medical histories and document medical records, mirroring the tasks performed by 5 junior residents in identical scenarios. A total of 10 diverse scenarios were examined.

Results: Evaluation of the medical documentation created by laypersons with ChatGPT assistance and those created by junior residents was conducted by 2 senior emergency physicians using audio recordings and the final medical records. The assessment used the Objective Structured Clinical Examination benchmarks in Taiwan as a reference. ChatGPT-4.0 exhibited substantial enhancements over its predecessor and met or exceeded the performance of human counterparts in terms of both checklist and global assessment scores. Although the overall quality of human consultations remained higher, ChatGPT-4.0's proficiency in medical documentation was notably promising.

Conclusions: The performance of ChatGPT 4.0 was on par with that of human participants in Objective Structured Clinical Examination evaluations, signifying its potential in medical history and medical record documentation. Despite this, the superiority of human consultations in terms of quality was evident. The study underscores both the promise and the current limitations of LLMs in the realm of clinical practice.

Citations: 0
Leveraging Open-Source Large Language Models for Data Augmentation in Hospital Staff Surveys: Mixed Methods Study. 在医院员工调查中利用开源大型语言模型进行数据扩充:混合方法研究。
IF 3.2 Q1 EDUCATION, SCIENTIFIC DISCIPLINES Pub Date : 2024-11-19 DOI: 10.2196/51433
Carl Ehrett, Sudeep Hegde, Kwame Andre, Dixizi Liu, Timothy Wilson

Background: Generative large language models (LLMs) have the potential to revolutionize medical education by generating tailored learning materials, enhancing teaching efficiency, and improving learner engagement. However, the application of LLMs in health care settings, particularly for augmenting small datasets in text classification tasks, remains underexplored, especially for cost- and privacy-conscious applications that do not permit the use of third-party services such as OpenAI's ChatGPT.

Objective: This study aims to explore the use of open-source LLMs, such as Large Language Model Meta AI (LLaMA) and Alpaca models, for data augmentation in a specific text classification task related to hospital staff surveys.

Methods: The surveys were designed to elicit narratives of everyday adaptation by frontline radiology staff during the initial phase of the COVID-19 pandemic. A 2-step process of data augmentation and text classification was conducted. The study generated synthetic data similar to the survey reports using 4 generative LLMs for data augmentation. A different set of 3 classifier LLMs was then used to classify the augmented text for thematic categories. The study evaluated performance on the classification task.

Results: The best-performing combination of generative LLM, temperature, classifier, and number of synthetic data cases was augmentation with LLaMA 7B at temperature 0.7 with 100 augments, using the Robustly Optimized BERT Pretraining Approach (RoBERTa) as the classifier, which achieved an average area under the receiver operating characteristic curve (AUC) of 0.87 (SD 0.02; ie, 1 SD). The results demonstrate that open-source LLMs can enhance text classifiers' performance on small datasets in health care contexts, providing promising pathways for improving medical education processes and patient care practices.

Conclusions: The study demonstrates the value of data augmentation with open-source LLMs, highlights the importance of privacy and ethical considerations when using LLMs, and suggests future directions for research in this field.
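The headline metric in this abstract, area under the ROC curve, has a simple probabilistic reading: the chance that a randomly chosen positive example is scored above a randomly chosen negative one. A minimal stdlib-only sketch of that computation, using toy scores and labels (not the study's data):

```python
def auc(scores, labels):
    """Area under the ROC curve via the Mann-Whitney statistic:
    the fraction of positive/negative pairs ranked correctly,
    counting ties as half a win."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Hypothetical classifier scores for a toy binary task.
scores = [0.9, 0.8, 0.7, 0.4, 0.3, 0.2]
labels = [1, 1, 0, 1, 0, 0]
print(round(auc(scores, labels), 2))  # → 0.89
```

Read this way, the reported mean AUC of 0.87 means the RoBERTa classifier ranked a random positive case above a random negative one about 87% of the time.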

Citations: 0
Virtual Reality Simulation in Undergraduate Health Care Education Programs: Usability Study. 本科医疗保健教育课程中的虚拟现实模拟:可用性研究
IF 3.2 Q1 EDUCATION, SCIENTIFIC DISCIPLINES Pub Date : 2024-11-19 DOI: 10.2196/56844
Gry Mørk, Tore Bonsaksen, Ole Sønnik Larsen, Hans Martin Kunnikoff, Silje Stangeland Lie

Background: Virtual reality (VR) is increasingly being used in higher education for clinical skills training and role-playing among health care students. Using 360° videos in VR headsets, followed by peer debriefing and group discussions, may strengthen students' social and emotional learning.

Objective: This study aimed to explore student-perceived usability of VR simulation in three health care education programs in Norway.

Methods: Students from one university participated in a VR simulation program. Of these, students in social education (n=74), nursing (n=45), and occupational therapy (n=27) completed a questionnaire asking about their perceptions of the usability of the VR simulation and the related learning activities. Differences between groups of students were examined with Pearson chi-square tests and with 1-way ANOVA. Qualitative content analysis was used to analyze data from open-ended questions.

Results: The nursing students were most satisfied with the usability of the VR simulation, while the occupational therapy students were least satisfied. Nursing students more often had prior experience with VR technology (60%), whereas occupational therapy students did less often (37%). Nevertheless, high mean scores indicated that the students found the VR simulation and the related learning activities very useful. The results also showed that by using realistic scenarios in VR simulation, health care students can be prepared for complex clinical situations in a safe environment. Also, group debriefing sessions are a vital part of the learning process that enhance active involvement with peers.

Conclusions: VR simulation has promise and potential as a pedagogical tool in health care education, especially for training soft skills relevant for clinical practice, such as communication, decision-making, time management, and critical thinking.
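The Pearson chi-square test named in the methods compares observed counts with those expected if the two factors were independent. A stdlib-only sketch with hypothetical counts, chosen only to roughly match the reported prior-experience rates (60% of 45 nursing students, 37% of 27 occupational therapy students), not the study's raw data:

```python
def chi_square(table):
    # Pearson chi-square statistic for an r x c contingency table:
    # sum of (observed - expected)^2 / expected, where "expected"
    # assumes the row and column factors are independent.
    row_tot = [sum(row) for row in table]
    col_tot = [sum(col) for col in zip(*table)]
    n = sum(row_tot)
    return sum((obs - rt * ct / n) ** 2 / (rt * ct / n)
               for row, rt in zip(table, row_tot)
               for obs, ct in zip(row, col_tot))

# Hypothetical prior-VR-experience counts (yes, no) per program.
table = [[27, 18],   # nursing: 27/45 = 60% with prior experience
         [10, 17]]   # occupational therapy: 10/27 ≈ 37%
print(round(chi_square(table), 2))  # → 3.56
```

The resulting statistic would then be compared against the chi-square distribution with (rows − 1) × (columns − 1) = 1 degree of freedom to obtain a P value.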

Citations: 0
Using ChatGPT in Nursing: Scoping Review of Current Opinions. 在护理中使用ChatGPT:当前意见的范围审查。
IF 3.2 Q1 EDUCATION, SCIENTIFIC DISCIPLINES Pub Date : 2024-11-19 DOI: 10.2196/54297
You Zhou, Si-Jia Li, Xing-Yi Tang, Yi-Chen He, Hao-Ming Ma, Ao-Qi Wang, Run-Yuan Pei, Mei-Hua Piao

Background: Since the release of ChatGPT in November 2022, this emerging technology has garnered a lot of attention in various fields, and nursing is no exception. However, to date, no study has comprehensively summarized the status and opinions of using ChatGPT across different nursing fields.

Objective: We aim to synthesize the status and opinions of using ChatGPT according to different nursing fields, as well as assess ChatGPT's strengths, weaknesses, and the potential impacts it may cause.

Methods: This scoping review was conducted following the framework of Arksey and O'Malley and guided by the PRISMA-ScR (Preferred Reporting Items for Systematic Reviews and Meta-Analyses extension for Scoping Reviews). A comprehensive literature search was conducted in 4 web-based databases (PubMed, Embase, Web of Science, and CINAHL) to identify studies reporting opinions on the use of ChatGPT in nursing fields from 2022 to September 3, 2023. The references of the included studies were screened manually to further identify relevant studies. Two authors conducted study screening, eligibility assessments, and data extraction independently.

Results: A total of 30 studies were included. The United States (7 studies), Canada (5 studies), and China (4 studies) were countries with the most publications. In terms of fields of concern, studies mainly focused on "ChatGPT and nursing education" (20 studies), "ChatGPT and nursing practice" (10 studies), and "ChatGPT and nursing research, writing, and examination" (6 studies). Six studies addressed the use of ChatGPT in multiple nursing fields.

Conclusions: As an emerging artificial intelligence technology, ChatGPT has great potential to revolutionize nursing education, nursing practice, and nursing research. However, researchers, institutions, and administrations still need to critically examine its accuracy, safety, and privacy, as well as academic misconduct and potential ethical issues that it may lead to before applying ChatGPT to practice.

Citations: 0
Correction: Psychological Safety Competency Training During the Clinical Internship From the Perspective of Health Care Trainee Mentors in 11 Pan-European Countries: Mixed Methods Observational Study. 更正:从 11 个泛欧国家医疗保健实习生导师的角度看临床实习期间的心理安全能力培训:混合方法观察研究。
IF 3.2 Q1 EDUCATION, SCIENTIFIC DISCIPLINES Pub Date : 2024-11-15 DOI: 10.2196/68503
Irene Carrillo, Ivana Skoumalová, Ireen Bruus, Victoria Klemm, Sofia Guerra-Paiva, Bojana Knežević, Augustina Jankauskiene, Dragana Jocic, Susanna Tella, Sandra C Buttigieg, Einav Srulovici, Andrea Madarasová Gecková, Kaja Põlluste, Reinhard Strametz, Paulo Sousa, Marina Odalovic, José Joaquín Mira

[This corrects the article DOI: 10.2196/64125.].

Citations: 0