An integrated model to evaluate the transparency in predicting employee churn using explainable artificial intelligence

Journal of Innovation & Knowledge (IF 15.5, Q1 Business), vol. 10, issue 3, Article 100700. Published 2025-05-01 (Epub 2025-04-16). DOI: 10.1016/j.jik.2025.100700
Meenu Chaudhary , Loveleen Gaur , Amlan Chakrabarti , Gurmeet Singh , Paul Jones , Sascha Kraus

Abstract

Recent studies focus on machine learning (ML) algorithms for predicting employee churn (ECn) to avert probable economic loss, technology leakage, and the transfer of customers and knowledge. However, can human resource professionals rely on algorithms for prediction? Can they make decisions when the prediction process is not known? Owing to their lack of interpretability, the opaque nature and growing intricacy of ML models make it challenging for field experts to comprehend these multifaceted black boxes. To address the interpretability, trust and transparency concerns of black-box predictions, this study explores the application of explainable artificial intelligence (XAI) to identify the factors that escalate ECn, analysing their negative impact on productivity, employee morale and financial stability. We propose a predictive model that compares the two top-performing algorithms on performance metrics. Thereafter, we apply an explainable-AI approach based on Shapley values, i.e., SHapley Additive exPlanations (SHAP), to identify and compare the feature importance of the top-performing algorithms, logistic regression and random forest, on our dataset. The interpretability of the predictive outcome unboxes the predictions, enhancing trust and facilitating retention strategies.
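The abstract attributes classifier predictions to input features via Shapley values. As an illustrative sketch only (not the paper's code or the SHAP library itself), the snippet below computes exact Shapley attributions for a toy linear "churn score" model, replacing absent features with a background mean, which is the kind of value function SHAP approximates. All feature weights and data here are hypothetical.

```python
from itertools import combinations
from math import factorial
import numpy as np

# Hypothetical linear churn-score model standing in for the paper's classifiers.
W, B = np.array([0.8, -0.5, 0.3]), 0.1

def model(X):
    return X @ W + B

def shapley_values(f, x, background):
    """Exact Shapley attributions for one instance x.

    Features outside a coalition S are set to the background mean,
    mirroring the value function that SHAP approximates.
    """
    n = len(x)
    base = background.mean(axis=0)
    phi = np.zeros(n)
    for i in range(n):
        others = [j for j in range(n) if j != i]
        for k in range(n):
            for S in combinations(others, k):
                z_with, z_without = base.copy(), base.copy()
                idx = list(S)
                z_with[idx] = x[idx]
                z_without[idx] = x[idx]
                z_with[i] = x[i]  # only difference: feature i joins the coalition
                weight = factorial(k) * factorial(n - k - 1) / factorial(n)
                phi[i] += weight * (f(z_with[None, :])[0] - f(z_without[None, :])[0])
    return phi

rng = np.random.default_rng(0)
bg = rng.normal(size=(100, 3))        # background "employee" data
x = np.array([1.0, 2.0, -1.0])        # one employee to explain
phi = shapley_values(model, x, bg)
```

For a linear model the attributions reduce to `phi_i = w_i * (x_i - mean_i)`, and by the efficiency axiom they sum to `f(x) - f(mean)`, which makes the sketch easy to check against the closed form.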
Source journal metrics: CiteScore 16.10 · Self-citation rate 12.70% · Articles per year 118 · Review time 37 days
About the journal: The Journal of Innovation and Knowledge (JIK) explores how innovation drives knowledge creation and vice versa, emphasizing that not all innovation leads to knowledge, but enduring innovation across diverse fields fosters theory and knowledge. JIK invites papers on innovations enhancing or generating knowledge, covering innovation processes, structures, outcomes, and behaviors at various levels. Articles in JIK examine knowledge-related changes promoting innovation for societal best practices. JIK serves as a platform for high-quality studies undergoing double-blind peer review, ensuring global dissemination to scholars, practitioners, and policymakers who recognize innovation and knowledge as economic drivers. It publishes theoretical articles, empirical studies, case studies, reviews, and other content, addressing current trends and emerging topics in innovation and knowledge. The journal welcomes suggestions for special issues and encourages articles to showcase contextual differences and lessons for a broad audience. In essence, JIK is an interdisciplinary journal dedicated to advancing theoretical and practical innovations and knowledge across multiple fields, including Economics, Business and Management, Engineering, Science, and Education.
Latest articles in this journal:
- How younger employees' gratitude expression impacts older employees' knowledge sharing: An emotions-as-social-information perspective
- The positive impact of transformational leadership on digital transformation and innovation in agri-food SMEs
- Mechanisms of green finance enhancing urban ecological resilience: Leveraging green innovation diffusion and digital financial network
- Open data as knowledge infrastructure for enabling corporate green transformation
- Artificial intelligence, dynamic capabilities, and innovation resilience: The contingent roles of market competition intensity and technological turbulence