{"title":"Step-by-Step Task Plan Explanations Beyond Causal Links","authors":"F. Lindner, Conny Olz","doi":"10.1109/RO-MAN53752.2022.9900590","DOIUrl":null,"url":null,"abstract":"Explainable robotics refers to the challenge of designing robots that can make their decisions transparent to humans. Recently, a number of approaches to task plan explanation have been proposed, which enable robots to explain each step in their plan to humans. These approaches have in common that they are based on the causal links in the plan. We discuss problems with using causal links for plan explanation. Particularly, their inability to distinguish enabling actions from requiring actions can lead to counter-intuitive explanations. We propose an extension that allows for making this relevant distinction and demonstrate how it can be applied to create a robot that explains its actions.","PeriodicalId":250997,"journal":{"name":"2022 31st IEEE International Conference on Robot and Human Interactive Communication (RO-MAN)","volume":"52 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2022-08-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"2","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2022 31st IEEE International Conference on Robot and Human Interactive Communication (RO-MAN)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/RO-MAN53752.2022.9900590","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 2
Abstract
Explainable robotics refers to the challenge of designing robots that can make their decisions transparent to humans. Recently, a number of approaches to task plan explanation have been proposed that enable robots to explain each step of their plan to humans. What these approaches have in common is that they are based on the causal links in the plan. We discuss problems with using causal links for plan explanation: in particular, their inability to distinguish enabling actions from requiring actions can lead to counter-intuitive explanations. We propose an extension that makes this distinction and demonstrate how it can be applied to build a robot that explains its actions.
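To make the idea concrete, the following minimal sketch shows how causal-link-based step explanations are typically generated and how an extra enabling/requiring annotation changes the phrasing. This is an illustration under stated assumptions, not the authors' formalization: the plan representation, the action and fact names, and the `enabling` flag are all hypothetical, and in the paper the distinction is derived rather than hand-set as it is here.

```python
# Minimal sketch (assumptions, not the paper's method): a plan's causal links
# as (producer, fact, consumer) triples, plus a hand-set flag marking whether
# the link is "enabling" or merely "required". The paper's contribution is how
# to obtain that distinction; here it is given as input to show the two
# explanation styles side by side.
from dataclasses import dataclass

@dataclass(frozen=True)
class CausalLink:
    producer: str   # action that achieves the fact
    fact: str       # the fact carried by the link
    consumer: str   # action whose precondition the fact satisfies
    enabling: bool  # hypothetical flag: True if producer enables consumer,
                    # False if consumer merely requires the fact

def explain_step(action: str, links: list[CausalLink]) -> str:
    """Return a step-by-step explanation sentence for `action`."""
    reasons = []
    for link in links:
        if link.producer == action:
            if link.enabling:
                reasons.append(f"it enables '{link.consumer}' by achieving {link.fact}")
            else:
                reasons.append(f"'{link.consumer}' requires {link.fact}")
    if not reasons:
        return f"I perform '{action}' because it achieves a goal."
    return f"I perform '{action}' because " + " and ".join(reasons) + "."

# Illustrative household example (all names made up):
links = [
    CausalLink("open-door", "door-open", "move-to-kitchen", enabling=True),
    CausalLink("pick-up-cup", "holding-cup", "place-cup-on-table", enabling=False),
]
print(explain_step("open-door", links))    # enabling phrasing
print(explain_step("pick-up-cup", links))  # requiring phrasing
```

With plain causal links, both steps would receive the same "provides fact F for step C" style of explanation; the annotation lets the robot phrase the first as enabling a later step and the second as satisfying a requirement, which is the kind of distinction the abstract argues is needed to avoid counter-intuitive explanations.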