Learning to Comprehend and Trust Artificial Intelligence Outcomes: A Conceptual Explainable AI Evaluation Framework
Authors: Peter E. D. Love; Jane Matthews; Weili Fang; Stuart Porter; Hanbin Luo; Lieyun Ding
Journal: IEEE Engineering Management Review, vol. 52, no. 1, pp. 230-247
DOI: 10.1109/EMR.2023.3342200
Publication date: 2023-12-20
URL: https://ieeexplore.ieee.org/document/10366794/
Citations: 0
Abstract
Explainable artificial intelligence (XAI) is a burgeoning concept. It is gaining prominence as an approach to better understand how the outputs of artificial intelligence solutions can improve decision making. Evaluation frameworks that enable organizations to understand the what, why, how, and when of XAI have yet to be developed. Thus, we aim to fill this void by developing a conceptual content, context, process, and outcome (CCPO) evaluation framework to justify XAI's adoption and effective management, using construction organizations as the backdrop for the article's setting. After introducing and describing the proposed CCPO framework for operationalizing XAI, we discuss its implications for future research. The contributions of our article are twofold: first, it highlights the need for organizations to embrace and enact XAI so that decision makers and stakeholders can better understand why and how a specific prediction materializes; and second, it provides a frame of reference for organizations to realize the business value and benefits of XAI.
Journal description:
IEEE Engineering Management Review reprints articles from other publications that are of significant interest to members. The papers are aimed at those engaged in managing research, development, or engineering activities. Reprints make it possible for readers to receive the best of today's literature without having to subscribe to and read other periodicals.