Towards Explainable Goal Recognition Using Weight of Evidence (WoE): A Human-Centered Approach
Abeer Alshehri, Amal Abdulrahman, Hajar Alamri, Tim Miller, Mor Vered
arXiv preprint 2409.11675 (arXiv - CS - Artificial Intelligence), published 2024-09-18
Abstract
Goal recognition (GR) involves inferring an agent's unobserved goal from a
sequence of observations. This is a critical problem in AI with diverse
applications. Traditionally, GR has been addressed using 'inference to the best
explanation' or abduction, where hypotheses about the agent's goals are
generated as the most plausible explanations for observed behavior.
Alternatively, some approaches enhance interpretability by ensuring that an
agent's behavior aligns with an observer's expectations or by making the
reasoning behind decisions more transparent. In this work, we tackle a
different challenge: explaining the GR process in a way that is comprehensible
to humans. We introduce and evaluate an explainable model for GR agents,
grounded in the theoretical framework and cognitive processes
underlying human behavior explanation. Drawing on insights from two human-agent
studies, we propose a conceptual framework for human-centered explanations of
GR. Using this framework, we develop the eXplainable Goal Recognition (XGR)
model, which generates explanations for both 'why' and 'why not' questions. We
evaluate the model computationally across eight GR benchmarks and through three
user studies. The first study assesses the efficiency of generating human-like
explanations within the Sokoban game domain, the second examines perceived
explainability in the same domain, and the third evaluates the model's
effectiveness in aiding decision-making in illegal fishing detection. Results
demonstrate that the XGR model significantly enhances user understanding,
trust, and decision-making compared to baseline models, underscoring its
potential to improve human-agent collaboration.
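
The "Weight of Evidence" in the title refers to the classical Bayesian measure of how strongly evidence supports a hypothesis over its complement, log[P(e|h)/P(e|not h)]. As a rough, hedged illustration of the kind of quantity such an explainer can report (not the XGR model's actual computation, which the abstract does not specify), the Python sketch below scores how strongly an observed action sequence favors each candidate goal against the alternatives; the goal names, likelihood values, and uniform prior are all hypothetical.

```python
import math

# Hypothetical Sokoban-style example: likelihood of the observed action
# sequence under each candidate goal. Values are illustrative only and are
# not taken from the paper.
likelihoods = {
    "push_box_to_A": 0.60,   # P(observations | goal)
    "push_box_to_B": 0.25,
    "push_box_to_C": 0.15,
}
priors = {g: 1.0 / len(likelihoods) for g in likelihoods}  # assumed uniform prior


def weight_of_evidence(goal: str) -> float:
    """log[ P(obs | goal) / P(obs | not goal) ], where the complement
    likelihood is the prior-weighted average over the remaining goals."""
    p_obs_given_goal = likelihoods[goal]
    others = [g for g in likelihoods if g != goal]
    p_not_goal = sum(priors[g] for g in others)
    p_obs_given_not_goal = sum(priors[g] * likelihoods[g] for g in others) / p_not_goal
    return math.log(p_obs_given_goal / p_obs_given_not_goal)


for g in likelihoods:
    print(f"{g}: WoE = {weight_of_evidence(g):+.2f}")
```

Under these made-up numbers only push_box_to_A receives positive weight of evidence, which is exactly the contrastive, "why this goal rather than another" information that a why/why-not explanation can surface.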