Assessing Model Requirements for Explainable AI: A Template and Exemplary Case Study
Michael Heider; Helena Stegherr; Richard Nordsieck; Jörg Hähner
Artificial Life, vol. 29, no. 4, pp. 468–486, November 2023. DOI: 10.1162/artl_a_00414
Abstract:
In sociotechnical settings, human operators are increasingly assisted by decision support systems. Employing such systems is expected to further improve important properties of sociotechnical systems, such as self-adaptation and self-optimization. To be accepted by operators and to engage with them efficiently, decision support systems need to be able to explain the reasoning behind specific decisions. In this article, we propose the use of learning classifier systems (LCSs), a family of rule-based machine learning methods, to facilitate transparent decision-making, and we highlight techniques that improve it further. Furthermore, we present a novel approach to assessing application-specific explainability needs for the design of LCS models. To this end, we propose an application-independent template of seven questions and demonstrate its use in an interview-based case study for a manufacturing scenario. We find that the answers received yield useful insights for designing a well-suited LCS model, as well as requirements stakeholders have for engaging actively with an intelligent agent.
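To make concrete why rule-based models such as LCSs lend themselves to transparent decision support, the following is a minimal, hypothetical Python sketch of an LCS-style rule representation. It is not the model or code from the paper; all names (Rule, matches, predict) and the toy feature intervals are invented for illustration. The key property it shows is that a prediction can be traced back to an explicit set of human-readable IF-THEN rules.

```python
# Minimal illustrative sketch (not from the paper): each LCS "classifier"
# pairs a human-readable IF-condition (feature intervals) with a THEN-action.
from dataclasses import dataclass

@dataclass
class Rule:
    lower: list[float]   # per-feature lower bounds of the IF-condition
    upper: list[float]   # per-feature upper bounds of the IF-condition
    action: str          # THEN-part: the recommendation shown to the operator
    fitness: float       # learned quality estimate, used to resolve conflicts

    def matches(self, x: list[float]) -> bool:
        """A rule applies if every feature lies inside its interval."""
        return all(lo <= xi <= hi for lo, xi, hi in zip(self.lower, x, self.upper))

def predict(rules: list[Rule], x: list[float]) -> tuple[str, list[Rule]]:
    """Return the fittest matching rule's action plus the full match set.

    Returning the match set is the hook for explanations: the operator can
    be shown exactly which IF-THEN rules supported the recommendation.
    """
    match_set = [r for r in rules if r.matches(x)]
    if not match_set:
        return "no recommendation", []
    best = max(match_set, key=lambda r: r.fitness)
    return best.action, match_set

# Toy example over two normalized features, e.g., (temperature, pressure).
rules = [
    Rule(lower=[0.0, 0.0], upper=[0.5, 1.0], action="keep settings", fitness=0.7),
    Rule(lower=[0.4, 0.2], upper=[1.0, 0.9], action="reduce feed rate", fitness=0.9),
]
action, explanation = predict(rules, [0.45, 0.5])
print(action)                                # -> reduce feed rate
print(len(explanation), "rule(s) matched")   # both rules match this input
```

Because each rule is locally interpretable, an explanation can be as simple as displaying the matched conditions alongside the chosen action; how much of this detail operators actually want is exactly the kind of requirement the paper's seven-question template is meant to elicit.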
Journal Introduction:
Artificial Life, launched in the fall of 1993, has become the unifying forum for the exchange of scientific information on the study of artificial systems that exhibit the behavioral characteristics of natural living systems, synthesized or simulated by computational (software), robotic (hardware), and/or physicochemical (wetware) means. Each issue features cutting-edge research on artificial life that advances the state of the art of our knowledge about various aspects of living systems, such as:
Artificial chemistry and the origins of life
Self-assembly, growth, and development
Self-replication and self-repair
Systems and synthetic biology
Perception, cognition, and behavior
Embodiment and enactivism
Collective behaviors of swarms
Evolutionary and ecological dynamics
Open-endedness and creativity
Social organization and cultural evolution
Societal and technological implications
Philosophy and aesthetics
Applications to biology, medicine, business, education, or entertainment.