{"title":"Crafting explainable artificial intelligence through active inference: A model for transparent introspection and decision-making","authors":"José Gabriel Carrasco Ramírez","doi":"10.60087/jaigs.vol4.issue1.p26","DOIUrl":null,"url":null,"abstract":"This paper explores the feasibility of constructing interpretable artificial intelligence (AI) systems rooted in active inference and the free energy principle. Initially, we offer a concise introduction to active inference, emphasizing its relevance to modeling decision-making, introspection, and the generation of both overt and covert actions. Subsequently, we delve into how active inference can serve as a foundation for designing explainable AI systems. Specifically, it enables us to capture essential aspects of \"introspective\" processes and generate intelligible models of decision-making mechanisms. We propose an architectural framework for explainable AI systems employing active inference. Central to this framework is an explicit hierarchical generative model that enables the AI system to monitor and elucidate the factors influencing its decisions. Importantly, this model's structure is designed to be understandable and verifiable by human users. We elucidate how this architecture can amalgamate diverse data sources to make informed decisions in a transparent manner, mirroring aspects of human consciousness and introspection. Finally, we examine the implications of our findings for future AI research and discuss potential ethical considerations associated with developing AI systems with (apparent) introspective capabilities.","PeriodicalId":517201,"journal":{"name":"Journal of Artificial Intelligence General science (JAIGS) ISSN:3006-4023","volume":"62 4","pages":""},"PeriodicalIF":0.0000,"publicationDate":"2024-04-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Journal of Artificial Intelligence General science (JAIGS) ISSN:3006-4023","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.60087/jaigs.vol4.issue1.p26","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Abstract
This paper explores the feasibility of constructing interpretable artificial intelligence (AI) systems rooted in active inference and the free energy principle. We first give a concise introduction to active inference, emphasizing its relevance to modeling decision-making, introspection, and the generation of both overt and covert actions. We then examine how active inference can serve as a foundation for designing explainable AI systems: in particular, it lets us capture essential aspects of "introspective" processes and build intelligible models of decision-making mechanisms. We propose an architectural framework for explainable AI systems based on active inference. Central to this framework is an explicit hierarchical generative model that enables the AI system to monitor and explain the factors influencing its decisions; importantly, the model's structure is designed to be understandable and verifiable by human users. We show how this architecture can integrate diverse data sources to make informed decisions in a transparent manner, mirroring aspects of human consciousness and introspection. Finally, we examine the implications of our findings for future AI research and discuss ethical considerations raised by developing AI systems with (apparent) introspective capabilities.
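To make the decision-making claim concrete, the sketch below implements a single decision step of the standard discrete-state (POMDP-style) active-inference scheme the abstract alludes to: beliefs over hidden states are updated from an observation, and each candidate action is scored by its expected free energy, decomposed into risk (divergence of predicted outcomes from preferred outcomes) and ambiguity (expected observation uncertainty). That decomposition is what the explainability argument rests on: the agent can report which factors drove its choice. The matrices A, B, C, D follow conventional active-inference notation, but the numbers and the code itself are illustrative assumptions, not the paper's implementation.

```python
# Minimal sketch of one active-inference decision step over a discrete
# generative model. Conventional notation is assumed; all values are toy
# numbers invented for illustration, not taken from the paper.
import numpy as np

A = np.array([[0.9, 0.1],      # P(o | s): likelihood of each observation given state
              [0.1, 0.9]])
B = np.array([[[1.0, 0.0],     # P(s' | s, a=0): action 0 keeps the state
               [0.0, 1.0]],
              [[0.0, 1.0],     # P(s' | s, a=1): action 1 swaps the state
               [1.0, 0.0]]])
C = np.array([0.8, 0.2])       # preferred (target) distribution over observations
D = np.array([0.5, 0.5])       # prior belief over initial hidden states

def posterior(o: int, prior: np.ndarray) -> np.ndarray:
    """Perceptual inference: Bayesian belief update over hidden states."""
    q = A[o] * prior
    return q / q.sum()

def expected_free_energy(q: np.ndarray, a: int):
    """Score action a under beliefs q; return (G, risk, ambiguity)."""
    qs = B[a] @ q                      # predicted state distribution after a
    qo = A @ qs                        # predicted observation distribution
    # Risk: KL divergence between predicted and preferred observations.
    risk = np.sum(qo * (np.log(qo + 1e-16) - np.log(C + 1e-16)))
    # Ambiguity: expected entropy of the likelihood mapping.
    H = -np.sum(A * np.log(A + 1e-16), axis=0)
    ambiguity = H @ qs
    return risk + ambiguity, risk, ambiguity

# One transparent decision step: observe, update beliefs, score each action,
# and report the factors (risk vs. ambiguity) behind the final choice.
q = posterior(o=0, prior=D)
scores = {a: expected_free_energy(q, a) for a in range(B.shape[0])}
for a, (G, risk, amb) in scores.items():
    print(f"action {a}: G={G:.3f} (risk={risk:.3f}, ambiguity={amb:.3f})")
print("chosen action:", min(scores, key=lambda a: scores[a][0]))
```

Because every quantity entering the choice is an explicit term of a declared generative model, the per-action printout doubles as a human-readable decision trace, which is the kind of monitoring the proposed hierarchical architecture is meant to provide at larger scale.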