POSSIBLE EVALUATION OF THE CORRECTNESS OF EXPLANATIONS TO THE END USER IN AN ARTIFICIAL INTELLIGENCE SYSTEM

S. Chalyi, V. Leshchynskyi
{"title":"POSSIBLE EVALUATION OF THE CORRECTNESS OF EXPLANATIONS TO THE END USER IN AN ARTIFICIAL INTELLIGENCE SYSTEM","authors":"S. Chalyi, V. Leshchynskyi","doi":"10.20998/2522-9052.2023.4.10","DOIUrl":null,"url":null,"abstract":"The subject of this paper is the process of evaluation of explanations in an artificial intelligence system. The aim is to develop a method for forming a possible evaluation of the correctness of explanations for the end user in an artificial intelligence system. The evaluation of the correctness of explanations makes it possible to increase the user's confidence in the solution of an artificial intelligence system and, as a result, to create conditions for the effective use of this solution. Aims: to structure explanations according to the user's needs; to develop an indicator of the correctness of explanations using the theory of possibilities; to develop a method for evaluating the correctness of explanations using the possibilities approach. The approaches used are a set-theoretic approach to describe the elements of explanations in an artificial intelligence system; a possibility approach to provide a representation of the criterion for evaluating explanations in an intelligent system; a probabilistic approach to describe the probabilistic component of the evaluation of explanations. The following results are obtained. The explanations are structured according to the needs of the user. It is shown that the explanation of the decision process is used by specialists in the development of intelligent systems. Such an explanation represents a complete or partial sequence of steps to derive a decision in an artificial intelligence system. End users mostly use explanations of the result presented by an intelligent system. Such explanations usually define the relationship between the values of input variables and the resulting prediction. The article discusses the requirements for evaluating explanations, considering the needs of internal and external users of an artificial intelligence system. It is shown that it is advisable to use explanation fidelity evaluation for specialists in the development of such systems, and explanation correctness evaluation for external users. An explanation correctness assessment is proposed that uses the necessity indicator in the theory of possibilities. A method for evaluation of explanation fidelity is developed. Conclusions. The scientific novelty of the obtained results is as follows. A possible method for assessing the correctness of an explanation in an artificial intelligence system using the indicators of possibility and necessity is proposed. 
The method calculates the necessity of using the target value of the input variable in the explanation, taking into account the possibility of choosing alternative values of the variables, which makes it possible to ensure that the target value of the input variable is necessary for the explanation and that the explanation is correct.","PeriodicalId":275587,"journal":{"name":"Advanced Information Systems","volume":"54 25","pages":""},"PeriodicalIF":0.0000,"publicationDate":"2023-12-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Advanced Information Systems","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.20998/2522-9052.2023.4.10","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}

Abstract

The subject of this paper is the process of evaluating explanations in an artificial intelligence system. The aim is to develop a method for forming a possibilistic evaluation of the correctness of explanations for the end user of an artificial intelligence system. Evaluating the correctness of explanations makes it possible to increase the user's confidence in the solution produced by an artificial intelligence system and, as a result, to create the conditions for the effective use of that solution. Objectives: to structure explanations according to the user's needs; to develop an indicator of the correctness of explanations based on possibility theory; to develop a method for evaluating the correctness of explanations using the possibilistic approach. The approaches used are: a set-theoretic approach to describe the elements of explanations in an artificial intelligence system; a possibilistic approach to represent the criterion for evaluating explanations in an intelligent system; and a probabilistic approach to describe the probabilistic component of the evaluation of explanations.

The following results were obtained. Explanations were structured according to the needs of the user. It is shown that explanations of the decision process are used by specialists who develop intelligent systems; such an explanation represents a complete or partial sequence of the steps by which an artificial intelligence system derives a decision. End users mostly rely on explanations of the result presented by an intelligent system; such explanations typically define the relationship between the values of the input variables and the resulting prediction. The article discusses the requirements for evaluating explanations with respect to the needs of internal and external users of an artificial intelligence system. It is shown that explanation fidelity evaluation is appropriate for specialists developing such systems, while explanation correctness evaluation is appropriate for external users. An assessment of explanation correctness based on the necessity indicator of possibility theory is proposed, and a method for evaluating explanation correctness is developed.

Conclusions. The scientific novelty of the results is as follows. A possibilistic method for assessing the correctness of an explanation in an artificial intelligence system using the indicators of possibility and necessity is proposed. The method calculates the necessity of using the target value of an input variable in the explanation, taking into account the possibility of choosing alternative values of that variable. This makes it possible to verify that the target value of the input variable is necessary for the explanation and, therefore, that the explanation is correct.
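The abstract does not reproduce the underlying formulas, but the quantities it refers to are the standard possibility and necessity measures of possibility theory: the possibility of an event is the supremum of the possibility distribution over that event, and the necessity of the target value is one minus the possibility of any alternative. Below is a minimal illustrative sketch in Python, not the authors' code: the names (pi, domain, the candidate values) and the numeric possibility degrees are hypothetical, chosen only to show how the necessity indicator would behave.

# Illustrative sketch (not the authors' code): possibility and necessity
# measures over a finite domain of candidate values of one input variable.

def possibility(event, pi):
    # Possibility of an event: supremum (here, max) of the possibility
    # distribution over the values in the event.
    return max((pi[v] for v in event), default=0.0)

def necessity(target, domain, pi):
    # Necessity of the target value: 1 minus the possibility of choosing
    # any alternative value. High necessity means no alternative is
    # plausible, i.e. the target value is required for the explanation.
    alternatives = [v for v in domain if v != target]
    return 1.0 - possibility(alternatives, pi)

# Hypothetical example: candidate values of an input variable with a
# normalized possibility distribution (at least one value has degree 1).
pi = {"high": 1.0, "medium": 0.3, "low": 0.1}
domain = list(pi)

print(necessity("high", domain, pi))    # 0.7 -> "high" is largely necessary
print(necessity("medium", domain, pi))  # 0.0 -> alternatives fully possible

In this standard formulation the necessity of the target value is positive only when it is strictly more possible than every alternative, which matches the abstract's claim: a high necessity indicator confirms that the target value of the input variable is required for the explanation, and hence that the explanation is correct.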