Modeling and Analysis of Explanation for Secure Industrial Control Systems

ACM Transactions on Autonomous and Adaptive Systems · Pub Date: 2022-12-15 · DOI: 10.1145/3557898
IF 2.2 · JCR Q3 (Computer Science, Artificial Intelligence) · CAS Zone 4 (Computer Science)
Sridhar Adepu, Nianyu Li, Eunsuk Kang, David Garlan
Citations: 0

Abstract

Many self-adaptive systems benefit from human involvement and oversight, where a human operator can provide expertise not available to the system and detect problems that the system is unaware of. One way of achieving this synergy is by placing the human operator on the loop—i.e., providing supervisory oversight and intervening in the case of questionable adaptation decisions. To make such interaction effective, an explanation can play an important role in allowing the human operator to understand why the system is making certain decisions and improve the level of knowledge that the operator has about the system. This, in turn, may improve the operator’s capability to intervene and, if necessary, override the decisions being made by the system. However, explanations may incur costs, in terms of delay in actions and the possibility that a human may make a bad judgment. Hence, it is not always obvious whether an explanation will improve overall utility and, if so, then what kind of explanation should be provided to the operator. In this work, we define a formal framework for reasoning about explanations of adaptive system behaviors and the conditions under which they are warranted. Specifically, we characterize explanations in terms of explanation content, effect, and cost. We then present a dynamic system adaptation approach that leverages a probabilistic reasoning technique to determine when an explanation should be used to improve overall system utility. We evaluate our explanation framework in the context of a realistic industrial control system with adaptive behaviors.
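The abstract's core decision, whether an explanation's benefit (a better-informed operator) outweighs its cost (delay, risk of a bad human judgment), can be framed as an expected-utility comparison. The sketch below is an illustrative reading of that idea, not the paper's actual formal framework or probabilistic reasoning technique; all function names, parameters, and values are assumptions introduced for illustration.

```python
# Hedged sketch (not the paper's formalism): decide whether to show an
# explanation to a human-on-the-loop operator by comparing expected
# utility with and without it. All names and numbers are illustrative.

def expected_utility(p_correct: float, u_correct: float,
                     u_wrong: float, cost: float) -> float:
    """Probability-weighted outcome utility minus the cost of this path
    (e.g., the delay incurred while the operator reads an explanation)."""
    return p_correct * u_correct + (1 - p_correct) * u_wrong - cost

def should_explain(p_correct_no_expl: float, p_correct_with_expl: float,
                   u_correct: float, u_wrong: float,
                   expl_cost: float) -> bool:
    """Provide an explanation only when it raises expected utility
    despite its cost."""
    u_without = expected_utility(p_correct_no_expl, u_correct, u_wrong, 0.0)
    u_with = expected_utility(p_correct_with_expl, u_correct, u_wrong, expl_cost)
    return u_with > u_without

# Example: the explanation raises the operator's chance of a correct
# intervention from 0.6 to 0.9, at a delay cost of 5 utility units.
print(should_explain(0.6, 0.9, u_correct=100.0, u_wrong=-50.0, expl_cost=5.0))
```

Under these illustrative numbers the expected utility rises from 40 to 80, so the explanation is warranted; with a marginal accuracy gain or a large cost, the same comparison would suppress it, matching the abstract's point that explanations are not always worth providing.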

Source Journal

ACM Transactions on Autonomous and Adaptive Systems (Engineering/Technology – Computer Science: Theory & Methods)
CiteScore: 4.80
Self-citation rate: 7.40%
Articles per year: 9
Review time: >12 weeks
Journal description: TAAS addresses research on autonomous and adaptive systems being undertaken by an increasingly interdisciplinary research community, and provides a common platform under which this work can be published and disseminated. TAAS encourages contributions aimed at supporting the understanding, development, and control of such systems and of their behaviors. Contributions are expected to be based on sound and innovative theoretical models, algorithms, engineering and programming techniques, infrastructures and systems, or technological and application experiences.