Assessing Model Requirements for Explainable AI: A Template and Exemplary Case Study

IF 1.6 | CAS Region 4, Computer Science | Q4 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Artificial Life | Pub Date: 2023-11-01 | DOI: 10.1162/artl_a_00414
Michael Heider;Helena Stegherr;Richard Nordsieck;Jörg Hähner
{"title":"评估可解释人工智能的模型要求:一个模板和示范案例研究。","authors":"Michael Heider;Helena Stegherr;Richard Nordsieck;Jörg Hähner","doi":"10.1162/artl_a_00414","DOIUrl":null,"url":null,"abstract":"In sociotechnical settings, human operators are increasingly assisted by decision support systems. By employing such systems, important properties of sociotechnical systems, such as self-adaptation and self-optimization, are expected to improve further. To be accepted by and engage efficiently with operators, decision support systems need to be able to provide explanations regarding the reasoning behind specific decisions. In this article, we propose the use of learning classifier systems (LCSs), a family of rule-based machine learning methods, to facilitate and highlight techniques to improve transparent decision-making. Furthermore, we present a novel approach to assessing application-specific explainability needs for the design of LCS models. For this, we propose an application-independent template of seven questions. We demonstrate the approach’s use in an interview-based case study for a manufacturing scenario. We find that the answers received do yield useful insights for a well-designed LCS model and requirements for stakeholders to engage actively with an intelligent agent.","PeriodicalId":55574,"journal":{"name":"Artificial Life","volume":"29 4","pages":"468-486"},"PeriodicalIF":1.6000,"publicationDate":"2023-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=10508338","citationCount":"0","resultStr":"{\"title\":\"Assessing Model Requirements for Explainable AI: A Template and Exemplary Case Study\",\"authors\":\"Michael Heider;Helena Stegherr;Richard Nordsieck;Jörg Hähner\",\"doi\":\"10.1162/artl_a_00414\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"In sociotechnical settings, human operators are increasingly assisted by decision support systems. By employing such systems, important properties of sociotechnical systems, such as self-adaptation and self-optimization, are expected to improve further. To be accepted by and engage efficiently with operators, decision support systems need to be able to provide explanations regarding the reasoning behind specific decisions. In this article, we propose the use of learning classifier systems (LCSs), a family of rule-based machine learning methods, to facilitate and highlight techniques to improve transparent decision-making. Furthermore, we present a novel approach to assessing application-specific explainability needs for the design of LCS models. For this, we propose an application-independent template of seven questions. We demonstrate the approach’s use in an interview-based case study for a manufacturing scenario. 
We find that the answers received do yield useful insights for a well-designed LCS model and requirements for stakeholders to engage actively with an intelligent agent.\",\"PeriodicalId\":55574,\"journal\":{\"name\":\"Artificial Life\",\"volume\":\"29 4\",\"pages\":\"468-486\"},\"PeriodicalIF\":1.6000,\"publicationDate\":\"2023-11-01\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=10508338\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Artificial Life\",\"FirstCategoryId\":\"94\",\"ListUrlMain\":\"https://ieeexplore.ieee.org/document/10508338/\",\"RegionNum\":4,\"RegionCategory\":\"计算机科学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q4\",\"JCRName\":\"COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Artificial Life","FirstCategoryId":"94","ListUrlMain":"https://ieeexplore.ieee.org/document/10508338/","RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q4","JCRName":"COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE","Score":null,"Total":0}
Citations: 0

Abstract

In sociotechnical settings, human operators are increasingly assisted by decision support systems. By employing such systems, important properties of sociotechnical systems, such as self-adaptation and self-optimization, are expected to improve further. To be accepted by and engage efficiently with operators, decision support systems need to be able to provide explanations regarding the reasoning behind specific decisions. In this article, we propose the use of learning classifier systems (LCSs), a family of rule-based machine learning methods, to facilitate and highlight techniques to improve transparent decision-making. Furthermore, we present a novel approach to assessing application-specific explainability needs for the design of LCS models. For this, we propose an application-independent template of seven questions. We demonstrate the approach’s use in an interview-based case study for a manufacturing scenario. We find that the answers received do yield useful insights for a well-designed LCS model and requirements for stakeholders to engage actively with an intelligent agent.
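The abstract refers to learning classifier systems (LCSs), which maintain populations of human-readable IF-THEN rules. As a rough illustration of why such rule sets lend themselves to explanation, here is a minimal, hypothetical Python sketch; it is not from the paper, and the rule conditions, feature names, and fitness values are invented. The point is that the rules that fire to produce a decision can double as that decision's explanation.

```python
# Minimal sketch of the kind of transparent, rule-based model an LCS evolves.
# NOT the authors' implementation: rules, features, and fitnesses are invented.

from dataclasses import dataclass
from typing import Callable, Optional

@dataclass
class Rule:
    """An IF-THEN rule: a readable condition mapped to a recommended action."""
    description: str                   # readable form, shown as the explanation
    condition: Callable[[dict], bool]  # IF-part, evaluated on a feature dict
    action: str                        # THEN-part, the recommended decision
    fitness: float                     # quality estimate learned from data

def predict_with_explanation(rules: list[Rule], x: dict) -> tuple[Optional[str], list[str]]:
    """Return the fittest matching rule's action plus the descriptions of all
    matching rules, which together serve as the explanation."""
    matching = [r for r in rules if r.condition(x)]
    if not matching:
        return None, []  # a real LCS would create a covering rule here
    best = max(matching, key=lambda r: r.fitness)
    return best.action, [r.description for r in matching]

# Hypothetical rules for a manufacturing decision-support scenario.
rules = [
    Rule("IF temperature > 80 AND pressure <= 2.0 THEN reduce feed rate",
         lambda x: x["temperature"] > 80 and x["pressure"] <= 2.0,
         "reduce feed rate", fitness=0.92),
    Rule("IF temperature <= 80 THEN keep current settings",
         lambda x: x["temperature"] <= 80,
         "keep current settings", fitness=0.85),
]

action, explanation = predict_with_explanation(rules, {"temperature": 85, "pressure": 1.5})
print(action)       # -> reduce feed rate
print(explanation)  # -> the IF-THEN rules that fired, readable by an operator
```

In an actual LCS, the rule population is adapted by evolutionary operators and the fitness values are learned from experience; the sketch only shows the transparency property the article builds on, namely that fired rules can be presented verbatim to an operator.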
Source journal

Artificial Life (Engineering & Technology, Computer Science: Theory & Methods)
CiteScore: 4.70
Self-citation rate: 7.70%
Articles per year: 38
Review time: >12 weeks
About the journal: Artificial Life, launched in the fall of 1993, has become the unifying forum for the exchange of scientific information on the study of artificial systems that exhibit the behavioral characteristics of natural living systems, through synthesis or simulation using computational (software), robotic (hardware), and/or physicochemical (wetware) means. Each issue features cutting-edge research on artificial life that advances the state of the art of our knowledge about various aspects of living systems, such as:
- Artificial chemistry and the origins of life
- Self-assembly, growth, and development
- Self-replication and self-repair
- Systems and synthetic biology
- Perception, cognition, and behavior
- Embodiment and enactivism
- Collective behaviors of swarms
- Evolutionary and ecological dynamics
- Open-endedness and creativity
- Social organization and cultural evolution
- Societal and technological implications
- Philosophy and aesthetics
- Applications to biology, medicine, business, education, or entertainment
Latest articles from this journal

- Continuous Evolution in the NK Treadmill Model
- Guideless Artificial Life Model for Reproduction, Development, and Interactions
- Modeling the Mutation and Competition of Certain Nutrient-Producing Protocells by Means of Specific Turing Machines
- Complexity, Artificial Life, and Artificial Intelligence
- Neurons as Autoencoders