Background: Digital competence is listed as one of the key competences for lifelong learning and is becoming increasingly important in both private and professional life. There is consensus within the health care sector that digital competence (or digital literacy) is needed in various professional fields. However, it remains unclear what exactly the digital competence of health professionals should include and how it can be measured.
Objective: This scoping review aims to provide an overview of the common definitions of digital literacy in scientific literature in the field of health care and the existing measurement instruments.
Methods: Peer-reviewed scientific papers from the last 10 years (2013-2023) in English or German that deal with the digital competence of health care workers in both outpatient and inpatient care were included. The databases ScienceDirect, Scopus, PubMed, EBSCOhost, MEDLINE, OpenAIRE, ERIC, OAIster, Cochrane Library, CAMbase, APA PsycNet, and Psyndex were searched for literature. The review follows the JBI methodology for scoping reviews, and the description of the results is based on the PRISMA-ScR (Preferred Reporting Items for Systematic Reviews and Meta-Analyses extension for Scoping Reviews) checklist.
Results: The initial search identified 1682 papers, of which 46 (2.73%) were included in the synthesis. The review results show that there is a strong focus on technical skills and knowledge with regard to both the definitions of digital competence and the measurement tools. A wide range of competences were identified within the analyzed works and integrated into a validated competence model in the areas of technical, methodological, social, and personal competences. The measurement instruments mainly used self-assessment of skills and knowledge as an indicator of competence and differed greatly in their statistical quality.
Conclusions: The identified multitude of subcompetences illustrates the complexity of digital competence in health care, and existing measuring instruments are not yet able to reflect this complexity.
Background: Artificial intelligence models can learn from medical literature and clinical cases and generate answers that rival those of human experts. However, challenges remain in the analysis of complex data containing images and diagrams.
Objective: This study aims to assess the answering capabilities and accuracy of ChatGPT-4 Vision (GPT-4V) for a set of 100 questions, including image-based questions, from the 2023 otolaryngology board certification examination.
Methods: Answers to 100 questions from the 2023 otolaryngology board certification examination, including image-based questions, were generated using GPT-4V. The accuracy rate was evaluated using different prompts, and the presence of images, clinical area of the questions, and variations in the answer content were examined.
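The abstract does not reproduce the querying setup, but a minimal sketch of how a question (with an optional image) might be submitted to GPT-4V through the OpenAI Python API is shown below; the model identifier, helper function name, and file handling are illustrative assumptions, not details taken from the paper.

```python
import base64

from openai import OpenAI  # assumes the official openai Python package (v1+)

client = OpenAI()  # reads OPENAI_API_KEY from the environment


def ask_gpt4v(question_text: str, image_path: str | None = None) -> str:
    """Send one exam question, optionally with an attached image, to GPT-4V."""
    content = [{"type": "text", "text": question_text}]
    if image_path is not None:
        # Images are passed inline as a base64-encoded data URL
        with open(image_path, "rb") as f:
            b64 = base64.b64encode(f.read()).decode("utf-8")
        content.append({
            "type": "image_url",
            "image_url": {"url": f"data:image/jpeg;base64,{b64}"},
        })
    response = client.chat.completions.create(
        model="gpt-4-vision-preview",  # illustrative GPT-4V model name
        messages=[{"role": "user", "content": content}],
    )
    return response.choices[0].message.content
```

Calling `ask_gpt4v(question)` corresponds to the text-only condition, while `ask_gpt4v(question, "figure.jpg")` corresponds to the text-plus-image condition compared in the study.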
Results: The accuracy rate for text-only input was, on average, 24.7% but improved to 47.3% with the addition of English translation and prompts (P<.001). The average nonresponse rate for text-only input was 46.3%; this decreased to 2.7% with the addition of English translation and prompts (P<.001). The accuracy rate was lower for image-based questions than for text-only questions across all types of input, with a relatively high nonresponse rate. General questions and questions from the fields of head and neck allergies and nasal allergies had relatively high accuracy rates, which increased with the addition of translation and prompts. In terms of content, questions related to anatomy had the highest accuracy rate. For all content types, the addition of translation and prompts increased the accuracy rate. For image-based questions specifically, the average correct answer rate was 30.4% with text-only input and 41.3% with text-plus-image input (P=.02).
Conclusions: Examining artificial intelligence's answering capabilities on the otolaryngology board certification examination improves our understanding of its potential and limitations in this field. Although accuracy improved with the addition of translation and prompts, the accuracy rate for image-based questions remained lower than that for text-based questions, suggesting room for improvement in GPT-4V at this stage. Furthermore, text-plus-image input produced a higher rate of correct answers on image-based questions than text-only input. Our findings imply the usefulness and potential of GPT-4V in medicine; however, methods for its safe use still need to be established.
Background: The COVID-19 pandemic has highlighted the growing relevance of telehealth in health care. Assessing health care and nursing students' telehealth competencies is crucial for its successful integration into education and practice.
Objective: We aimed to assess students' perceived telehealth knowledge, skills, attitudes, and experiences. In addition, we aimed to examine students' preferences for telehealth content and teaching methods within their curricula.
Methods: We conducted a cross-sectional web-based study in May 2022. A project-specific questionnaire, developed and refined through iterative feedback and face-validity testing, addressed topics such as demographics, personal perceptions, and professional experience with telehealth and solicited input on potential telehealth course content. Statistical analyses were conducted on surveys with at least a 50% completion rate, including descriptive statistics of categorical variables, graphical representation of results, and Kruskal-Wallis tests for central tendencies in subgroup analyses.
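As an illustration of the subgroup analysis named above, a minimal sketch of a Kruskal-Wallis test in Python with SciPy follows; the Likert-style response values and group labels are hypothetical, not data from the survey.

```python
from scipy import stats

# Hypothetical 5-point Likert responses (e.g., telehealth interest)
# from three study programs
program_a = [4, 5, 3, 4, 5, 4]
program_b = [2, 3, 3, 2, 4, 3]
program_c = [5, 4, 4, 5, 3, 5]

# Kruskal-Wallis H test: nonparametric comparison of central tendencies
# across more than two independent groups
h_stat, p_value = stats.kruskal(program_a, program_b, program_c)
print(f"H = {h_stat:.2f}, P = {p_value:.3f}")
```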
Results: A total of 261 students from 7 bachelor's and 4 master's health care and nursing programs participated in the study. Most students expressed interest in telehealth (180/261, 69% very or rather interested) and recognized its importance in their education (215/261, 82.4% very or rather important). However, most participants reported limited knowledge of telehealth applications concerning their profession (only 7/261, 2.7% stated profound knowledge) and limited active experience with various telehealth applications (between 18/261, 6.9% and 63/261, 24.1%). Statistically significant differences were found between study programs regarding telehealth interest (P=.005), knowledge (P<.001), perceived importance in education (P<.001), and perceived relevance after the pandemic (P=.004). Practical training with devices, software, and apps, as well as telehealth case examples with various patient groups, was perceived as most important for integration into future curricula. Most students preferred both interdisciplinary and program-specific courses.
Conclusions: This study emphasizes the need to integrate telehealth into health care education curricula: students report positive attitudes toward telehealth but do not appear adequately prepared for its implementation. The results of this study can inform the design of telehealth curricula that optimally prepare future health professionals for the increasing role of telehealth in practice.
Background: Previous research applying large language models (LLMs) to medicine focused on text-based information. Recently, multimodal variants of LLMs have acquired the capability to recognize images.
Objective: We aim to evaluate the image recognition capability of generative pretrained transformer (GPT)-4V, a recent multimodal LLM developed by OpenAI, in the medical field by testing how visual information affects its performance in answering questions from the 117th Japanese National Medical Licensing Examination.
Methods: We focused on 108 questions that had 1 or more images as part of a question and presented GPT-4V with the same questions under two conditions: (1) with both the question text and associated images and (2) with the question text only. We then compared the difference in accuracy between the 2 conditions using the exact McNemar test.
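For illustration, the paired comparison described above can be run as an exact McNemar test in Python with statsmodels; the 2x2 cell counts below are hypothetical values chosen only to be consistent with the reported marginal totals (73/108 correct with images, 78/108 without), not the paper's actual contingency table.

```python
from statsmodels.stats.contingency_tables import mcnemar

# Hypothetical paired outcomes on the 108 image questions:
# rows = correct with images (yes/no), cols = correct without images (yes/no)
table = [[65, 8],   # 65 correct in both conditions, 8 only with images
         [13, 22]]  # 13 only without images, 22 wrong in both

# Exact (binomial) McNemar test on the discordant pairs
result = mcnemar(table, exact=True)
print(f"statistic = {result.statistic}, P = {result.pvalue:.2f}")
```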
Results: Among the 108 questions with images, GPT-4V's accuracy was 68% (73/108) when presented with images and 72% (78/108) when presented without images (P=.36). For the 2 question categories, clinical and general, the accuracies with versus without images were 71% (70/98) versus 78% (76/98; P=.21) and 30% (3/10) versus 20% (2/10; P≥.99), respectively.
Conclusions: The additional information from the images did not significantly improve the performance of GPT-4V in the Japanese National Medical Licensing Examination.
Background: Medical students in Japan undergo a 2-year postgraduate residency program to acquire clinical knowledge and general medical skills. The General Medicine In-Training Examination (GM-ITE) assesses postgraduate residents' clinical knowledge. A clinical simulation video (CSV) may help assess learners' interpersonal abilities.
Objective: This study aimed to evaluate the relationship between GM-ITE scores and resident physicians' diagnostic skills by having them watch a CSV and to explore resident physicians' perceptions of the CSV's realism, educational value, and impact on their motivation to learn.
Methods: The participants included 56 postgraduate medical residents who took the GM-ITE between January 21 and January 28, 2021; watched the CSV; and then provided a diagnosis. The CSV and GM-ITE scores were compared, and the validity of the simulations was examined using discrimination indices, wherein ≥0.20 indicated high discriminatory power and >0.40 indicated a very good measure of the subject's qualifications. Additionally, we administered an anonymous questionnaire to ascertain participants' views on the realism and educational value of the CSV and its impact on their motivation to learn.
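As a sketch of how such a discrimination index is commonly computed (proportion correct in the top-scoring group minus that in the bottom-scoring group), the following Python snippet applies the thresholds quoted above; the group size and counts are hypothetical, not figures from the study.

```python
# Item discrimination index: difference in proportion correct between the
# upper and lower scoring groups of examinees.
# Thresholds from the abstract: >=0.20 high discrimination, >0.40 very good.

def discrimination_index(upper_correct: int, lower_correct: int,
                         group_size: int) -> float:
    """Proportion correct in the upper group minus that in the lower group."""
    return upper_correct / group_size - lower_correct / group_size

# Hypothetical example: 15 of 19 top scorers vs 6 of 19 bottom scorers correct
d = discrimination_index(upper_correct=15, lower_correct=6, group_size=19)
print(f"D = {d:.2f}")  # 0.47 -> very good discriminatory power
```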
Results: Of the 56 participants, 6 (11%) provided the correct diagnosis, all of whom were in their second postgraduate year. All domains indicated high discriminatory power. The anonymous survey revealed that 12 (52%) participants found the CSV format more suitable than the conventional GM-ITE for assessing clinical competence, 18 (78%) affirmed the realism of the video simulation, and 17 (74%) indicated that the experience increased their motivation to learn.
Conclusions: The findings indicated that CSV modules simulating real-world clinical examinations were successful in assessing examinees' clinical competence across multiple domains. The study demonstrated that the CSV not only augmented the assessment of diagnostic skills but also positively impacted learners' motivation, suggesting a multifaceted role for simulation in medical education.
The communication gap between patients and health care professionals has led to increased disputes and wasted resources in the medical domain. The development of artificial intelligence and other technologies brings new possibilities for solving this problem. This viewpoint paper proposes a new relationship between patients and health care professionals, "shared decision-making," which allows both sides to gain a deeper understanding of the disease and reach a consensus during diagnosis and treatment. The paper then discusses the important impact of ChatGPT-like solutions on the treatment of rheumatoid arthritis with methotrexate from the clinical and patient perspectives. For clinical professionals, ChatGPT-like solutions could provide support in disease diagnosis, treatment, and clinical trials, but attention should be paid to privacy, confidentiality, and regulatory norms. For patients, ChatGPT-like solutions allow easy access to massive amounts of information; however, this information should be carefully managed to ensure safe and effective care. To ensure the effective application of ChatGPT-like solutions in improving the relationship between patients and health care professionals, it is essential to establish a comprehensive database and provide legal, ethical, and other support. Above all, ChatGPT-like solutions can benefit patients and health care professionals if they remain evidence based, protect data, and collaborate with regulatory authorities as regulation evolves.
[This corrects the article DOI: 10.2196/45312.]