Counterfactual Explanation and Causal Inference In Service of Robustness in Robot Control

Simón C. Smith, S. Ramamoorthy
{"title":"Counterfactual Explanation and Causal Inference In Service of Robustness in Robot Control","authors":"Simón C. Smith, S. Ramamoorthy","doi":"10.1109/ICDL-EpiRob48136.2020.9278061","DOIUrl":null,"url":null,"abstract":"We propose an architecture for training generative models of counterfactual conditionals of the form, ‘can we modify event A to cause B instead of C?’, motivated by applications in robot control. Using an ‘adversarial training’ paradigm, an image-based deep neural network model is trained to produce small and realistic modifications to an original image in order to cause user-defined effects. These modifications can be used in the design process of image-based robust control - to determine the ability of the controller to return to a working regime by modifications in the input space, rather than by adaptation. In contrast to conventional control design approaches, where robustness is quantified in terms of the ability to reject noise, we explore the space of counterfactuals that might cause a certain requirement to be violated, thus proposing an alternative model that might be more expressive in certain robotics applications. So, we propose the generation of counterfactuals as an approach to explanation of black-box models and the envisioning of potential movement paths in autonomous robotic control. Firstly, we demonstrate this approach in a set of classification tasks, using the well known MNIST and CelebFaces Attributes datasets. Then, addressing multi-dimensional regression, we demonstrate our approach in a reaching task with a physical robot, and in a navigation task with a robot in a digital twin simulation.","PeriodicalId":114948,"journal":{"name":"2020 Joint IEEE 10th International Conference on Development and Learning and Epigenetic Robotics (ICDL-EpiRob)","volume":"79 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2020-09-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"10","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2020 Joint IEEE 10th International Conference on Development and Learning and Epigenetic Robotics (ICDL-EpiRob)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/ICDL-EpiRob48136.2020.9278061","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 10

Abstract

We propose an architecture for training generative models of counterfactual conditionals of the form 'can we modify event A to cause B instead of C?', motivated by applications in robot control. Using an adversarial training paradigm, an image-based deep neural network model is trained to produce small, realistic modifications to an original image that cause user-defined effects. These modifications can be used in the design of image-based robust control: to determine the controller's ability to return to a working regime through modifications in the input space, rather than through adaptation. In contrast to conventional control design approaches, where robustness is quantified as the ability to reject noise, we explore the space of counterfactuals that might cause a given requirement to be violated, thus proposing an alternative model that may be more expressive in certain robotics applications. We therefore propose counterfactual generation as an approach both to explaining black-box models and to envisioning potential movement paths in autonomous robot control. First, we demonstrate the approach on a set of classification tasks using the well-known MNIST and CelebFaces Attributes datasets. Then, addressing multi-dimensional regression, we demonstrate it on a reaching task with a physical robot and on a navigation task with a robot in a digital-twin simulation.
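To make the idea in the abstract concrete, below is a minimal PyTorch sketch of how a counterfactual generator of this kind might be trained. The abstract only specifies the ingredients (an adversarial paradigm, small and realistic image modifications, a user-defined target effect), so the architecture, loss weights, and the `classifier`/`discriminator` interfaces here are illustrative assumptions, not the paper's actual implementation.

```python
# Hedged sketch of counterfactual generation via adversarial training.
# Assumes: `classifier` is the black-box model to explain (logits output),
# `discriminator` is a realism critic (higher score = more realistic).
import torch
import torch.nn as nn

class PerturbationGenerator(nn.Module):
    """Produces a small additive modification delta(x) for an input image x."""
    def __init__(self, channels=1):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(channels, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, channels, 3, padding=1), nn.Tanh(),  # bounded delta
        )

    def forward(self, x):
        return self.net(x)

def counterfactual_loss(classifier, discriminator, gen, x, target_class,
                        w_cls=1.0, w_small=0.1, w_real=0.1):
    """Three terms matching the abstract's requirements (weights assumed):
    (1) steer the black-box classifier toward the user-defined effect
        ('cause B instead of C'),
    (2) keep the modification small,
    (3) keep the modified image realistic via the adversarial critic."""
    delta = gen(x)
    x_cf = torch.clamp(x + delta, 0.0, 1.0)           # counterfactual image
    cls_loss = nn.functional.cross_entropy(classifier(x_cf), target_class)
    small_loss = delta.abs().mean()                   # minimality of the edit
    real_loss = -discriminator(x_cf).mean()           # realism (critic score)
    return w_cls * cls_loss + w_small * small_loss + w_real * real_loss
```

In this reading, the classifier term plays the role of the user-defined effect, while the minimality and realism terms keep the counterfactual close to the original input; for the regression tasks (reaching, navigation) the cross-entropy term would presumably be replaced by a regression loss toward the desired output, though the abstract does not detail this.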