From human explanations to explainable AI: Insights from constrained optimization

Inga Ibs, Claire Ott, Frank Jäkel, Constantin A. Rothkopf

Cognitive Systems Research, Volume 88, Article 101297 (published online 18 October 2024). DOI: 10.1016/j.cogsys.2024.101297
Many complex decision-making scenarios encountered in the real world, including energy systems and infrastructure planning, can be formulated as constrained optimization problems. Solutions to these problems are often obtained using white-box solvers based on linear program representations. Even though these algorithms are well understood and the optimality of the solution is guaranteed, explanations for the solutions are still necessary to build trust and ensure the implementation of policies. Solution algorithms represent the problem in a high-dimensional abstract space, which does not translate well into intuitive explanations for lay people. Here, we report three studies in which we pose constrained optimization problems to participants in the form of a computer game. In the game, called Furniture Factory, participants manage a company that produces furniture. In two qualitative studies, we first elicit representations and heuristics with concurrent explanations and validate their use in post-hoc explanations. We analyze the complexity of the explanations given by participants to gain a deeper understanding of how complex cognitively adequate explanations should be. Based on insights from the analysis of the two qualitative studies, we formalize strategies that in combination can act as descriptors for participants' behavior and optimal solutions. We match the strategies to decisions in a large behavioral dataset (>150 participants) gathered in a third study, and compare the complexity of strategy combinations to the complexity featured in participants' explanations. Based on the analyses from these three studies, we discuss how these insights can inform the automatic generation of cognitively adequate explanations in future AI systems.
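The Furniture Factory game is, at its core, a production-planning problem of the kind the abstract describes: maximize profit subject to limited resources, expressible as a linear program. As a minimal sketch of such a formulation (the products, profits, and resource limits below are illustrative assumptions, not the parameters used in the studies), a white-box LP solver can be invoked as follows:

    # Toy production-planning LP in the spirit of the Furniture Factory game.
    # All numbers are hypothetical, chosen only for illustration.
    from scipy.optimize import linprog

    profit = [30, 50]              # profit per chair, per table
    c = [-p for p in profit]       # linprog minimizes, so negate to maximize

    A_ub = [[1, 2],                # wood consumed per chair, per table
            [3, 2]]                # labor hours per chair, per table
    b_ub = [40, 60]                # available wood, available labor hours

    res = linprog(c, A_ub=A_ub, b_ub=b_ub,
                  bounds=[(0, None), (0, None)], method="highs")
    print(res.x)                   # optimal production plan, here [10. 15.]
    print(-res.fun)                # maximal profit, here 1050.0

The solver returns a point in the abstract decision space together with a guarantee of optimality; as the abstract argues, that numeric output is not in itself an explanation a lay decision-maker can follow, which is what motivates studying how people themselves explain such solutions.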
About the journal:
Cognitive Systems Research is dedicated to the study of human-level cognition. As such, it welcomes papers that advance the understanding, design, and application of cognitive and intelligent systems, both natural and artificial.
The journal brings together a broad community studying cognition in its many facets in vivo and in silico, across the developmental spectrum, focusing on individual capacities or on entire architectures. It aims to foster debate and to integrate ideas, concepts, constructs, theories, models, and techniques from different disciplines and different perspectives on human-level cognition. The scope of interest includes the study of cognitive capacities and architectures - both brain-inspired and non-brain-inspired - and the application of cognitive systems to real-world problems insofar as it offers insights relevant to the understanding of cognition.
Cognitive Systems Research therefore welcomes mature and cutting-edge research approaching cognition from a systems-oriented perspective, both theoretical and empirically-informed, in the form of original manuscripts, short communications, opinion articles, systematic reviews, and topical survey articles from the fields of Cognitive Science (including Philosophy of Cognitive Science), Artificial Intelligence/Computer Science, Cognitive Robotics, Developmental Science, Psychology, and Neuroscience and Neuromorphic Engineering. Empirical studies will be considered if they are supplemented by theoretical analyses and contributions to theory development and/or computational modelling studies.