Soumyadeb Chowdhury, Sian Joel-Edgar, P. Dey, S. Bhattacharya, Alexander Kharlamov
{"title":"Embedding transparency in artificial intelligence machine learning models: managerial implications on predicting and explaining employee turnover","authors":"Soumyadeb Chowdhury, Sian Joel-Edgar, P. Dey, S. Bhattacharya, Alexander Kharlamov","doi":"10.1080/09585192.2022.2066981","DOIUrl":null,"url":null,"abstract":"Abstract Employee turnover (ET) is a major issue faced by firms in all business sectors. Artificial intelligence (AI) machine learning (ML) prediction models can help to classify the likelihood of employees voluntarily departing from employment using historical employee datasets. However, output responses generated by these AI-based ML models lack transparency and interpretability, making it difficult for HR managers to understand the rationale behind the AI predictions. If managers do not understand how and why responses are generated by AI models based on the input datasets, it is unlikely to augment data-driven decision-making and bring value to the organisations. The main purpose of this article is to demonstrate the capability of Local Interpretable Model-Agnostic Explanations (LIME) technique to intuitively explain the ET predictions generated by AI-based ML models for a given employee dataset to HR managers. From a theoretical perspective, we contribute to the International Human Resource Management literature by presenting a conceptual review of AI algorithmic transparency and then discussing its significance to sustain competitive advantage by using the principles of resource-based view theory. We also offer a transparent AI implementation framework using LIME which will provide a useful guide for HR managers to increase the explainability of the AI-based ML models, and therefore mitigate trust issues in data-driven decision-making.","PeriodicalId":22502,"journal":{"name":"The International Journal of Human Resource Management","volume":"40 1","pages":"2732 - 2764"},"PeriodicalIF":0.0000,"publicationDate":"2023-08-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"6","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"The International Journal of Human Resource Management","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1080/09585192.2022.2066981","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 6
Abstract
Employee turnover (ET) is a major issue faced by firms in all business sectors. Artificial intelligence (AI) machine learning (ML) prediction models can help to classify the likelihood of employees voluntarily departing from employment using historical employee datasets. However, the output responses generated by these AI-based ML models lack transparency and interpretability, making it difficult for HR managers to understand the rationale behind the AI predictions. If managers do not understand how and why an AI model generates its responses from the input datasets, the model is unlikely to augment data-driven decision-making or bring value to the organisation. The main purpose of this article is to demonstrate the capability of the Local Interpretable Model-Agnostic Explanations (LIME) technique to intuitively explain to HR managers the ET predictions generated by AI-based ML models for a given employee dataset. From a theoretical perspective, we contribute to the International Human Resource Management literature by presenting a conceptual review of AI algorithmic transparency and then discussing its significance for sustaining competitive advantage using the principles of resource-based view theory. We also offer a transparent AI implementation framework using LIME that provides a useful guide for HR managers to increase the explainability of AI-based ML models and thereby mitigate trust issues in data-driven decision-making.
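To illustrate the kind of explanation the abstract refers to, the sketch below shows how LIME is typically applied to a tabular turnover classifier using the open-source `lime` Python package. This is not the authors' code or dataset: the feature names, the synthetic data, and the random-forest model are hypothetical stand-ins used only to show the general workflow of fitting a black-box model and then generating a local, per-employee explanation.

```python
# Minimal sketch (assumed setup, not the paper's implementation): explain a
# single employee-turnover prediction from a tabular classifier with LIME.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from lime.lime_tabular import LimeTabularExplainer

rng = np.random.default_rng(0)
# Hypothetical feature names standing in for a historical employee dataset.
feature_names = ["tenure_years", "monthly_hours", "satisfaction", "salary_band"]

# Synthetic data: 500 employees, 4 features, binary label
# (1 = voluntarily left the organisation, 0 = stayed).
X = rng.normal(size=(500, 4))
y = (X[:, 2] + 0.5 * X[:, 0] + rng.normal(scale=0.5, size=500) < 0).astype(int)

# Train an opaque ("black-box") turnover prediction model.
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

# Build a LIME explainer over the training-data distribution.
explainer = LimeTabularExplainer(
    X,
    feature_names=feature_names,
    class_names=["stay", "leave"],
    mode="classification",
)

# Explain one employee's prediction as a weighted list of local feature effects,
# which is the interpretable output an HR manager would review.
exp = explainer.explain_instance(X[0], clf.predict_proba, num_features=4)
for feature, weight in exp.as_list():
    print(f"{feature}: {weight:+.3f}")
```

The output is a ranked list of feature conditions with signed weights for the chosen employee, e.g. indicating that low satisfaction pushed the prediction toward "leave", which is the intuitive, case-level rationale the article argues HR managers need to trust AI-based ML predictions.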