Towards Human Cognition Level-based Experiment Design for Counterfactual Explanations

M. Nizami, Muhammad Yaseen Khan, A. Bogliolo
DOI: 10.1109/MAJICC56935.2022.9994203
Published in: 2022 Mohammad Ali Jinnah University International Conference on Computing (MAJICC), 2022-10-27
Citations: 0

Abstract

Explainable Artificial Intelligence (XAI) has recently seen a swell of interest, as many Artificial Intelligence (AI) practitioners and developers are compelled to rationalize how such AI-based systems work. Decades back, most XAI systems were developed as knowledge-based or expert systems. These systems reasoned over the technical description of an explanation, with little regard for the user's cognitive capabilities. The emphasis of XAI research appears to have turned toward more pragmatic approaches to explanation in pursuit of better understanding. An extensive area where cognitive science research may substantially influence XAI advancements is the evaluation of user knowledge and feedback, which is essential for XAI system evaluation. To this end, we propose a framework for experimenting with generating and evaluating explanations on the grounds of different cognitive levels of understanding. In this regard, we adopt Bloom's taxonomy, a widely accepted model for assessing users' cognitive capability. We utilize counterfactual explanations as an explanation-providing medium, coupled with user feedback, to validate the level of understanding of the explanation at each cognitive level and to improve the explanation-generation methods accordingly.
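The abstract's central device, the counterfactual explanation, answers the question "what minimal change to the input would have flipped the model's decision?". The sketch below is not the authors' framework; it is a generic illustration of the idea using a toy linear classifier and a greedy one-feature search, where the weights, features, and step size are all illustrative assumptions:

```python
# Minimal counterfactual-explanation sketch (illustrative, not the paper's method).

def predict(x, weights, bias):
    """Toy linear classifier: returns 1 (e.g. 'approved') if the score is positive."""
    score = sum(w * xi for w, xi in zip(weights, x)) + bias
    return 1 if score > 0 else 0

def counterfactual(x, weights, bias, step=0.1, max_iter=1000):
    """Greedily nudge the most influential feature until the prediction flips."""
    original = predict(x, weights, bias)
    cf = list(x)
    for _ in range(max_iter):
        if predict(cf, weights, bias) != original:
            return cf  # smallest found input change that alters the decision
        # Pick the feature with the largest weight magnitude and move it in the
        # direction that pushes the score across the decision boundary.
        i = max(range(len(weights)), key=lambda j: abs(weights[j]))
        direction = 1 if (weights[i] > 0) == (original == 0) else -1
        cf[i] += direction * step
    return None  # no counterfactual found within the budget

weights, bias = [0.8, -0.3], -1.0   # hypothetical model parameters
x = [0.5, 0.2]                      # input currently classified as 0
cf = counterfactual(x, weights, bias)
print(x, "->", predict(x, weights, bias))
print(cf, "->", predict(cf, weights, bias))
```

The contrast between `x` and `cf` ("had feature 0 been about 1.4 instead of 0.5, the decision would differ") is the kind of explanation the paper proposes to present to users and then evaluate against Bloom's taxonomy levels via their feedback.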