Aim
To synthesize literature on algorithmic bias and transparency in artificial intelligence tools used in nursing education and identify common applications, bias types, transparency challenges and mitigation strategies.
Background
Artificial intelligence is increasingly embedded in nursing education through tutoring systems, virtual simulations, predictive models, chatbots and automated grading. These tools enhance personalization and efficiency but may introduce bias and opacity that threaten fairness, learning outcomes and student trust.
Design
Scoping review.
Methods
PubMed, CINAHL, Scopus, IEEE Xplore, ACM Digital Library and grey literature were searched for sources published January 2015–April 2025. Studies were included if they examined artificial intelligence in nursing education and addressed bias, fairness, or transparency. Data were synthesized narratively and organized into thematic categories.
Results
Thirty-five studies met the inclusion criteria. Reported tools included tutoring systems, virtual patients, chatbots, predictive analytics and grading technologies. Reported biases were linked to non-representative datasets, narrow scenario design and evaluation criteria that may disadvantage students by race and ethnicity, gender, language background, or learning preferences. Transparency challenges involved proprietary or complex models, limited disclosure of training data and decision rules, and a lack of built-in explainability features. Ethical concerns included reduced student autonomy, unequal outcomes across learner groups and diminished trust. Mitigation strategies included more inclusive data selection, integration of explainability features, algorithm audits and stronger institutional oversight.
Conclusions
Bias and transparency limitations in artificial intelligence tools pose ethical challenges in nursing education. Inclusive design, system transparency and collaboration among educators, developers and institutions are essential for value-aligned integration.
