Enhancing puncture skills training with generative AI and digital technologies: a parallel cohort study

Zhe Ji, Yuliang Jiang, Haitao Sun, Bin Qiu, Yi Chen, Mao Li, Jinghong Fan, Junjie Wang

BMC Medical Education 24(1):1328, published 2024-11-19. DOI: 10.1186/s12909-024-06217-0 (https://doi.org/10.1186/s12909-024-06217-0)
Open access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11575025/pdf/
Abstract
Background: Traditional puncture skills training for refresher doctors faces limitations in effectiveness and efficiency. This study explored the application of generative AI (ChatGPT), templates, and digital imaging to enhance puncture skills training.
Methods: Ninety refresher doctors were enrolled sequentially into three groups: traditional training; template and digital imaging training; and ChatGPT, template, and digital imaging training. Outcomes included theoretical knowledge, technical skills, and trainee satisfaction, measured at baseline, post-training, and 3-month follow-up.
Results: The ChatGPT group's theoretical knowledge scores exceeded those of the traditional training group by 17-21% at post-training (81.6 ± 4.56 vs. 69.6 ± 4.58, p < 0.001) and at follow-up (86.5 ± 4.08 vs. 71.3 ± 4.83, p < 0.001). It also outperformed the template group by 4-5% at post-training (81.6 ± 4.56 vs. 78.5 ± 4.65, p = 0.032) and at follow-up (86.5 ± 4.08 vs. 82.7 ± 4.68, p = 0.004). For technical skills, the ChatGPT (4.0 ± 0.32) and template (4.0 ± 0.18) groups scored similarly at post-training, outperforming traditional training (3.6 ± 0.50) by 11% (p < 0.001). At follow-up, the ChatGPT (4.0 ± 0.18) and template (4.0 ± 0.32) groups still exceeded traditional training (3.8 ± 0.43) by 5% (p = 0.071 and p = 0.026, respectively). Learning curve analysis showed the fastest knowledge (slope 13.02) and skill (slope 0.62) acquisition in the ChatGPT group, compared with the template (slopes 11.28 and 0.38) and traditional (slopes 5.17 and 0.53) groups. ChatGPT responses were rated 100% relevant, 50% complete, and 60% accurate, with a response time of 15.9 s. The ChatGPT group also reported the highest training satisfaction (4.2 ± 0.73), above the template (3.8 ± 0.68) and traditional (2.6 ± 0.94) groups (p < 0.01).
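As a quick arithmetic check, and assuming the quoted percentages are relative to the comparison group's mean (the abstract does not state the formula explicitly), the headline knowledge gains follow directly from the reported means:

\[
\frac{81.6 - 69.6}{69.6} \approx 17.2\% \;\text{(post-training)}, \qquad
\frac{86.5 - 71.3}{71.3} \approx 21.3\% \;\text{(follow-up)},
\]

matching the stated 17-21% range; likewise \((4.0 - 3.6)/3.6 \approx 11.1\%\) reproduces the post-training technical skills gap.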
Conclusion: Integrating AI, templates, and digital imaging significantly improved puncture knowledge and skills over traditional training. Combining technological innovations with AI shows promise for streamlining the mastery of complex medical competencies.
About the journal
BMC Medical Education is an open access journal publishing original peer-reviewed research articles in relation to the training of healthcare professionals, including undergraduate, postgraduate, and continuing education. The journal has a special focus on curriculum development, evaluations of performance, assessment of training needs and evidence-based medicine.