Semifactual Explanations for Reinforcement Learning

Jasmina Gajcin, Jovan Jeromela, Ivana Dusparic
{"title":"强化学习的半成品解释","authors":"Jasmina Gajcin, Jovan Jeromela, Ivana Dusparic","doi":"arxiv-2409.05435","DOIUrl":null,"url":null,"abstract":"Reinforcement Learning (RL) is a learning paradigm in which the agent learns\nfrom its environment through trial and error. Deep reinforcement learning (DRL)\nalgorithms represent the agent's policies using neural networks, making their\ndecisions difficult to interpret. Explaining the behaviour of DRL agents is\nnecessary to advance user trust, increase engagement, and facilitate\nintegration with real-life tasks. Semifactual explanations aim to explain an\noutcome by providing \"even if\" scenarios, such as \"even if the car were moving\ntwice as slowly, it would still have to swerve to avoid crashing\". Semifactuals\nhelp users understand the effects of different factors on the outcome and\nsupport the optimisation of resources. While extensively studied in psychology\nand even utilised in supervised learning, semifactuals have not been used to\nexplain the decisions of RL systems. In this work, we develop a first approach\nto generating semifactual explanations for RL agents. We start by defining five\nproperties of desirable semifactual explanations in RL and then introducing\nSGRL-Rewind and SGRL-Advance, the first algorithms for generating semifactual\nexplanations in RL. We evaluate the algorithms in two standard RL environments\nand find that they generate semifactuals that are easier to reach, represent\nthe agent's policy better, and are more diverse compared to baselines. Lastly,\nwe conduct and analyse a user study to assess the participant's perception of\nsemifactual explanations of the agent's actions.","PeriodicalId":501479,"journal":{"name":"arXiv - CS - Artificial Intelligence","volume":"9 1","pages":""},"PeriodicalIF":0.0000,"publicationDate":"2024-09-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Semifactual Explanations for Reinforcement Learning\",\"authors\":\"Jasmina Gajcin, Jovan Jeromela, Ivana Dusparic\",\"doi\":\"arxiv-2409.05435\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Reinforcement Learning (RL) is a learning paradigm in which the agent learns\\nfrom its environment through trial and error. Deep reinforcement learning (DRL)\\nalgorithms represent the agent's policies using neural networks, making their\\ndecisions difficult to interpret. Explaining the behaviour of DRL agents is\\nnecessary to advance user trust, increase engagement, and facilitate\\nintegration with real-life tasks. Semifactual explanations aim to explain an\\noutcome by providing \\\"even if\\\" scenarios, such as \\\"even if the car were moving\\ntwice as slowly, it would still have to swerve to avoid crashing\\\". Semifactuals\\nhelp users understand the effects of different factors on the outcome and\\nsupport the optimisation of resources. While extensively studied in psychology\\nand even utilised in supervised learning, semifactuals have not been used to\\nexplain the decisions of RL systems. In this work, we develop a first approach\\nto generating semifactual explanations for RL agents. We start by defining five\\nproperties of desirable semifactual explanations in RL and then introducing\\nSGRL-Rewind and SGRL-Advance, the first algorithms for generating semifactual\\nexplanations in RL. 
We evaluate the algorithms in two standard RL environments\\nand find that they generate semifactuals that are easier to reach, represent\\nthe agent's policy better, and are more diverse compared to baselines. Lastly,\\nwe conduct and analyse a user study to assess the participant's perception of\\nsemifactual explanations of the agent's actions.\",\"PeriodicalId\":501479,\"journal\":{\"name\":\"arXiv - CS - Artificial Intelligence\",\"volume\":\"9 1\",\"pages\":\"\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2024-09-09\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"arXiv - CS - Artificial Intelligence\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/arxiv-2409.05435\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"arXiv - CS - Artificial Intelligence","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/arxiv-2409.05435","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 0

Abstract

Reinforcement Learning (RL) is a learning paradigm in which the agent learns from its environment through trial and error. Deep reinforcement learning (DRL) algorithms represent the agent's policies using neural networks, making their decisions difficult to interpret. Explaining the behaviour of DRL agents is necessary to advance user trust, increase engagement, and facilitate integration with real-life tasks. Semifactual explanations aim to explain an outcome by providing "even if" scenarios, such as "even if the car were moving twice as slowly, it would still have to swerve to avoid crashing". Semifactuals help users understand the effects of different factors on the outcome and support the optimisation of resources. While extensively studied in psychology and even utilised in supervised learning, semifactuals have not been used to explain the decisions of RL systems. In this work, we develop a first approach to generating semifactual explanations for RL agents. We start by defining five properties of desirable semifactual explanations in RL and then introduce SGRL-Rewind and SGRL-Advance, the first algorithms for generating semifactual explanations in RL. We evaluate the algorithms in two standard RL environments and find that they generate semifactuals that are easier to reach, represent the agent's policy better, and are more diverse compared to baselines. Lastly, we conduct and analyse a user study to assess participants' perception of semifactual explanations of the agent's actions.