Meysam Rabiee, Mohsen Mirhashemi, Michael S. Pangburn, Saeed Piri, Dursun Delen
Towards explainable artificial intelligence through expert-augmented supervised feature selection
Decision Support Systems, Volume 181, April 2024, Article 114214
DOI: 10.1016/j.dss.2024.114214
URL: https://www.sciencedirect.com/science/article/pii/S0167923624000472
Citations: 0
Abstract
This paper presents a comprehensive framework for expert-augmented supervised feature selection, addressing pre-processing, in-processing, and post-processing aspects of Explainable Artificial Intelligence (XAI). As part of pre-processing XAI, we introduce the Probabilistic Solution Generator through Information Fusion (PSGIF) algorithm, leveraging ensemble techniques to enhance the exploration and exploitation capabilities of a Genetic Algorithm (GA). Balancing explainability and prediction accuracy, we formulate two multi-objective optimization models that empower experts to specify a maximum acceptable sacrifice percentage. This approach enhances explainability by reducing the number of selected features and prioritizing those considered more relevant from the domain expert's perspective. This contribution aligns with in-processing XAI, incorporating expert opinions into the feature selection process as a multi-objective problem. Traditional feature selection techniques lack the capability to efficiently search the solution space given our explainability-focused objective function. To overcome this, we leverage the GA, a powerful metaheuristic, optimizing its parameters through Bayesian optimization. For post-processing XAI, we present the Posterior Ensemble Algorithm (PEA), which estimates the predictive power of features. PEA enables a nuanced comparison between objective and subjective importance, identifying features as underrated, overrated, or appropriately rated. We evaluate the performance of our proposed GAs on 16 publicly available datasets, focusing on prediction accuracy in a single-objective setting. Moreover, we test our multi-objective model on a classification dataset to show the applicability and effectiveness of our framework. Overall, this paper provides a holistic and nuanced approach to explainable feature selection, offering decision-makers a comprehensive understanding of feature importance.
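The paper's own objective functions, PSGIF, and PEA are not reproduced in the abstract. As a rough illustration of the general idea of GA-based feature selection with an explainability-oriented penalty on feature count, consider this minimal sketch. All specifics here are invented for illustration (the `relevance` scores, the `alpha` weighting, and the GA parameters are hypothetical and do not come from the paper):

```python
import random

def fitness(mask, relevance, alpha=0.05):
    """Toy fitness: reward the total relevance of selected features,
    penalize the number of selected features to favor explainability.
    (Hypothetical weighting, not the paper's objective function.)"""
    score = sum(r for r, m in zip(relevance, mask) if m)
    return score - alpha * sum(mask)

def ga_feature_select(relevance, pop_size=30, gens=40, seed=0):
    """Simple elitist GA over binary feature masks."""
    rng = random.Random(seed)
    n = len(relevance)
    pop = [[rng.randint(0, 1) for _ in range(n)] for _ in range(pop_size)]
    for _ in range(gens):
        # Keep the fitter half as elites (elitism: the best mask is never lost).
        pop.sort(key=lambda m: fitness(m, relevance), reverse=True)
        elite = pop[: pop_size // 2]
        children = []
        while len(elite) + len(children) < pop_size:
            a, b = rng.sample(elite, 2)
            cut = rng.randrange(1, n)          # one-point crossover
            child = a[:cut] + b[cut:]
            if rng.random() < 0.1:             # occasional bit-flip mutation
                child[rng.randrange(n)] ^= 1
            children.append(child)
        pop = elite + children
    return max(pop, key=lambda m: fitness(m, relevance))

# Six hypothetical features: three strongly relevant, three nearly irrelevant.
relevance = [0.9, 0.8, 0.02, 0.01, 0.7, 0.03]
best = ga_feature_select(relevance)
```

With this penalty, any feature whose relevance falls below `alpha` lowers fitness, so the GA tends toward a small, high-relevance subset. The paper's framework layers much more on top of this basic mechanism: ensemble-seeded initial populations (PSGIF), expert-weighted multi-objective terms with a sacrifice tolerance, and Bayesian tuning of the GA parameters.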
Journal overview:
The common thread of articles published in Decision Support Systems is their relevance to theoretical and technical issues in the support of enhanced decision making. The areas addressed may include foundations, functionality, interfaces, implementation, impacts, and evaluation of decision support systems (DSSs).