Crafting explainable artificial intelligence through active inference: A model for transparent introspection and decision-making

José Gabriel Carrasco Ramírez
{"title":"通过主动推理打造可解释的人工智能:透明自省和决策模型","authors":"José Gabriel Carrasco Ramírez","doi":"10.60087/jaigs.vol4.issue1.p26","DOIUrl":null,"url":null,"abstract":"This paper explores the feasibility of constructing interpretable artificial intelligence (AI) systems rooted in active inference and the free energy principle. Initially, we offer a concise introduction to active inference, emphasizing its relevance to modeling decision-making, introspection, and the generation of both overt and covert actions. Subsequently, we delve into how active inference can serve as a foundation for designing explainable AI systems. Specifically, it enables us to capture essential aspects of \"introspective\" processes and generate intelligible models of decision-making mechanisms. We propose an architectural framework for explainable AI systems employing active inference. Central to this framework is an explicit hierarchical generative model that enables the AI system to monitor and elucidate the factors influencing its decisions. Importantly, this model's structure is designed to be understandable and verifiable by human users. We elucidate how this architecture can amalgamate diverse data sources to make informed decisions in a transparent manner, mirroring aspects of human consciousness and introspection. Finally, we examine the implications of our findings for future AI research and discuss potential ethical considerations associated with developing AI systems with (apparent) introspective capabilities.","PeriodicalId":517201,"journal":{"name":"Journal of Artificial Intelligence General science (JAIGS) ISSN:3006-4023","volume":"62 4","pages":""},"PeriodicalIF":0.0000,"publicationDate":"2024-04-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Crafting explainable artificial intelligence through active inference: A model for transparent introspection and decision-making\",\"authors\":\"José Gabriel Carrasco Ramírez\",\"doi\":\"10.60087/jaigs.vol4.issue1.p26\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"This paper explores the feasibility of constructing interpretable artificial intelligence (AI) systems rooted in active inference and the free energy principle. Initially, we offer a concise introduction to active inference, emphasizing its relevance to modeling decision-making, introspection, and the generation of both overt and covert actions. Subsequently, we delve into how active inference can serve as a foundation for designing explainable AI systems. Specifically, it enables us to capture essential aspects of \\\"introspective\\\" processes and generate intelligible models of decision-making mechanisms. We propose an architectural framework for explainable AI systems employing active inference. Central to this framework is an explicit hierarchical generative model that enables the AI system to monitor and elucidate the factors influencing its decisions. Importantly, this model's structure is designed to be understandable and verifiable by human users. We elucidate how this architecture can amalgamate diverse data sources to make informed decisions in a transparent manner, mirroring aspects of human consciousness and introspection. 
Finally, we examine the implications of our findings for future AI research and discuss potential ethical considerations associated with developing AI systems with (apparent) introspective capabilities.\",\"PeriodicalId\":517201,\"journal\":{\"name\":\"Journal of Artificial Intelligence General science (JAIGS) ISSN:3006-4023\",\"volume\":\"62 4\",\"pages\":\"\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2024-04-23\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Journal of Artificial Intelligence General science (JAIGS) ISSN:3006-4023\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.60087/jaigs.vol4.issue1.p26\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Journal of Artificial Intelligence General science (JAIGS) ISSN:3006-4023","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.60087/jaigs.vol4.issue1.p26","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 0

Abstract

This paper explores the feasibility of constructing interpretable artificial intelligence (AI) systems rooted in active inference and the free energy principle. Initially, we offer a concise introduction to active inference, emphasizing its relevance to modeling decision-making, introspection, and the generation of both overt and covert actions. Subsequently, we delve into how active inference can serve as a foundation for designing explainable AI systems. Specifically, it enables us to capture essential aspects of "introspective" processes and generate intelligible models of decision-making mechanisms. We propose an architectural framework for explainable AI systems employing active inference. Central to this framework is an explicit hierarchical generative model that enables the AI system to monitor and elucidate the factors influencing its decisions. Importantly, this model's structure is designed to be understandable and verifiable by human users. We elucidate how this architecture can amalgamate diverse data sources to make informed decisions in a transparent manner, mirroring aspects of human consciousness and introspection. Finally, we examine the implications of our findings for future AI research and discuss potential ethical considerations associated with developing AI systems with (apparent) introspective capabilities.
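To make the abstract's central mechanism concrete, the sketch below implements discrete-state active inference in miniature: a posterior over hidden states is updated by Bayes' rule, and each candidate action is scored by its expected free energy (risk plus ambiguity), with the per-action terms printed as a human-readable account of the choice. The toy world, the matrices A, B, and C, and all function names are illustrative assumptions, not the paper's architecture.

```python
# A minimal, illustrative sketch of discrete-state active inference with a
# transparent report of each action's expected free energy. The toy world,
# the matrices (A, B, C), and the helper names are assumptions made for
# illustration; they are not taken from the paper.
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

# Generative model of a two-state, two-observation, two-action world.
A = np.array([[0.9, 0.1],           # A[o, s] = p(observation o | state s)
              [0.1, 0.9]])
B = {0: np.eye(2),                  # B[a][s', s] = p(s' | s, a): "stay"
     1: np.array([[0.0, 1.0],
                  [1.0, 0.0]])}     # "switch" hidden states
C = softmax(np.array([3.0, 0.0]))   # prior preference over observations

def infer_state(obs, prior):
    """Bayesian posterior over hidden states given one observation."""
    post = A[obs] * prior
    return post / post.sum()

def expected_free_energy(q_s, action):
    """Return (risk, ambiguity) for an action; their sum is minimized."""
    q_s_next = B[action] @ q_s                    # predicted next state
    q_o = A @ q_s_next                            # predicted observation
    risk = np.sum(q_o * (np.log(q_o + 1e-16) - np.log(C + 1e-16)))
    ambiguity = -np.sum(A * np.log(A + 1e-16), axis=0) @ q_s_next
    return risk, ambiguity

q_s = infer_state(obs=0, prior=np.array([0.5, 0.5]))
report = {a: expected_free_energy(q_s, a) for a in B}
G = np.array([sum(report[a]) for a in B])
p_action = softmax(-G)                            # precision fixed at 1

# The same quantities that drive the choice double as its explanation.
print(f"belief over states: {q_s}")
for a, (risk, amb) in report.items():
    print(f"action {a}: risk={risk:.3f}, ambiguity={amb:.3f}, G={risk+amb:.3f}")
print(f"p(action): {p_action}")
```

Exposing the risk and ambiguity terms behind each choice is one simple way a generative model's decision factors could be made inspectable by human users, in the spirit of the transparent introspection the abstract describes; the paper's proposal concerns a richer hierarchical generative model than this flat example.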