Machine Learning Explainability for Intrusion Detection in the Industrial Internet of Things
Love Allen Chijioke Ahakonye, C. I. Nwakanma, Jae Min Lee, Dong-Seong Kim
IEEE Internet of Things Magazine, vol. 46, no. 10, pp. 68-74, May 2024. DOI: 10.1109/IOTM.001.2300171
Abstract
Intrusions and attacks have consistently challenged the Industrial Internet of Things (IIoT). Although artificial intelligence (AI) is advancing rapidly in attack detection and mitigation, building convincing trust in it is difficult due to its black-box nature: unexplained outcomes inhibit informed, adequate decision-making by experts and stakeholders. Explainable AI (XAI) has emerged to address this problem, yet the comprehensibility of XAI interpretations remains an issue because of their complexity and reliance on statistical theory. This study integrates the model-agnostic, post-hoc LIME and SHAP explainability approaches into intrusion detection systems built with representative AI models, explaining the models' decisions and providing deeper insight into their interpretability. Decision and confidence impact ratios assess the significance of features and model dependencies, enhancing cybersecurity experts' trust and supporting informed decisions. The experimental findings highlight the importance of these explainability techniques for understanding and mitigating IIoT intrusions by recourse to significant data features and model decisions.
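Below is a minimal sketch of how post-hoc LIME and SHAP explanations can be attached to a trained intrusion-detection classifier, as the abstract describes. The synthetic traffic features, feature names, and random-forest model are illustrative assumptions, not the paper's exact dataset or architecture.

```python
# Sketch: post-hoc LIME and SHAP explanations for an IDS classifier.
# The data, feature names, and model below are illustrative assumptions.
import numpy as np
import shap
from lime.lime_tabular import LimeTabularExplainer
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Hypothetical tabular IIoT traffic features (flow/packet statistics).
rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 4))
y = (X[:, 0] + 0.5 * X[:, 2] > 0).astype(int)  # 1 = attack, 0 = normal
feature_names = ["pkt_rate", "payload_len", "flow_duration", "port_entropy"]

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_train, y_train)

# SHAP: per-feature attributions for every test flow (tree-ensemble explainer).
shap_explainer = shap.TreeExplainer(model)
shap_values = shap_explainer.shap_values(X_test)

# LIME: local surrogate explanation for a single flagged flow.
lime_explainer = LimeTabularExplainer(
    X_train,
    feature_names=feature_names,
    class_names=["normal", "attack"],
    mode="classification",
)
explanation = lime_explainer.explain_instance(
    X_test[0], model.predict_proba, num_features=4)
print(explanation.as_list())  # (feature condition, weight) pairs for this prediction
```

In this sketch SHAP supplies consistent attributions across the whole test set while LIME explains individual detections, which mirrors the complementary global/local roles the abstract assigns to the two techniques.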