Toward robust systems against sensor-based adversarial examples based on the criticalities of sensors

Ade Kurniawan, Y. Ohsita, Masayuki Murata
{"title":"Toward robust systems against sensor-based adversarial examples based on the criticalities of sensors.","authors":"Ade Kurniawan, Y. Ohsita, Masayuki Murata","doi":"10.1109/ICAIC60265.2024.10433806","DOIUrl":null,"url":null,"abstract":"In multi-sensor systems, certain sensors could have vulnerabilities that may be exploited to produce AEs. However, it is difficult to protect all sensor devices, because the risk of the existence of vulnerable sensor devices increases as the number of sensor devices increases. Therefore, we need a method to protect ML models even if a part of the sensors are compromised by the attacker. One approach is to detect the sensors used by the attacks and remove the detected sensors. However, such reactive defense method has limitations. If some critical sensors that are necessary to distinguish required states are compromised by the attacker, we cannot obtain the suitable output. In this paper, we discuss a strategy to make the system robust against AEs proactively. A system with enough redundancy can work after removing the features from the sensors used in the AEs. That is, we need a metric to check if the system has enough redundancy. In this paper, we define groups of sensors that might be compromised by the same attacker, and we propose a metric called criticality that indicates how important each group of sensors are for classification between two classes. Based on the criticality, we can make the system robust against sensor-based AEs by interactively adding sensors so as to decrease the criticality of any groups of sensors for the classes that must be distinguished.","PeriodicalId":517265,"journal":{"name":"2024 IEEE 3rd International Conference on AI in Cybersecurity (ICAIC)","volume":"26 5","pages":"1-5"},"PeriodicalIF":0.0000,"publicationDate":"2024-02-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2024 IEEE 3rd International Conference on AI in Cybersecurity (ICAIC)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/ICAIC60265.2024.10433806","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}

Abstract

In multi-sensor systems, some sensors may have vulnerabilities that can be exploited to produce adversarial examples (AEs). However, it is difficult to protect all sensor devices, because the risk that a vulnerable device exists grows as the number of sensor devices increases. Therefore, we need a method to protect ML models even if some of the sensors are compromised by an attacker. One approach is to detect the sensors used by an attack and remove them. However, such a reactive defense has limitations: if critical sensors that are necessary to distinguish the required states are compromised, the system cannot produce a suitable output. In this paper, we discuss a strategy to proactively make the system robust against AEs. A system with enough redundancy can keep working after the features from the sensors used in the AEs are removed; thus, we need a metric to check whether the system has enough redundancy. We define groups of sensors that might be compromised by the same attacker, and we propose a metric called criticality that indicates how important each group of sensors is for classification between two classes. Based on the criticality, we can make the system robust against sensor-based AEs by iteratively adding sensors so as to decrease the criticality of every group of sensors for the classes that must be distinguished.
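The abstract does not give the paper's exact formula for criticality. A minimal sketch, assuming criticality of a sensor group for a class pair is the drop in pairwise classification accuracy when that group's features are masked out, is shown below; all names (`sensor_groups`, `criticality`, `mask_value`) and the masking strategy are illustrative assumptions, not the authors' definition.

```python
# Hedged sketch: criticality of a sensor group for a class pair, read as the
# loss of pairwise accuracy once that group's features are removed (masked).
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split


def pairwise_accuracy(model, X, y, class_a, class_b):
    """Accuracy of the model restricted to samples of the two classes."""
    idx = np.isin(y, [class_a, class_b])
    return model.score(X[idx], y[idx])


def criticality(model, X, y, group_features, class_a, class_b, mask_value=0.0):
    """Accuracy drop on (class_a, class_b) when the group's features are
    replaced by a neutral value, i.e., how much the pair relies on them."""
    baseline = pairwise_accuracy(model, X, y, class_a, class_b)
    X_masked = X.copy()
    X_masked[:, group_features] = mask_value  # simulate removing the group
    return baseline - pairwise_accuracy(model, X_masked, y, class_a, class_b)


# Toy usage: 3 sensors with 2 features each, 3 classes.
rng = np.random.default_rng(0)
X = rng.normal(size=(600, 6))
y = rng.integers(0, 3, size=600)
X[y == 1, 0:2] += 2.0  # class 1 is separable mainly via sensor s0
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X_tr, y_tr)

sensor_groups = {"s0": [0, 1], "s1": [2, 3], "s2": [4, 5]}
for name, feats in sensor_groups.items():
    c = criticality(model, X_te, y_te, feats, class_a=0, class_b=1)
    print(f"criticality of {name} for classes (0, 1): {c:.3f}")
```

Under this reading, a group whose removal barely changes pairwise accuracy has low criticality, and adding redundant sensors should drive down the largest group criticality for every class pair that must be distinguished.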