Adversarial measurements for convolutional neural network-based energy theft detection model in smart grid

Santosh Nirmal, Pramod Patil, Sagar Shinde
Journal: e-Prime - Advances in Electrical Engineering, Electronics and Energy, Vol. 11, Article 100909
DOI: 10.1016/j.prime.2025.100909
Published: 2025-01-21
URL: https://www.sciencedirect.com/science/article/pii/S2772671125000166

Abstract

Electricity theft has become a major problem worldwide and a significant burden for utility companies. It not only causes revenue loss but also degrades power quality, increases generation costs, and raises overall electricity prices. Electricity (energy) theft detection (ETD) systems based on machine learning, particularly neural networks, achieve high accuracy and have become popular in the literature for their strong detection performance. Recent studies, however, reveal that machine learning and deep learning models are vulnerable: new attack techniques keep emerging across domains, including energy and finance. As machine learning is increasingly used for energy theft detection, it has become important to explore its weaknesses. Research has shown that most ETD models are vulnerable to evasion attacks (EAs), whose goal is to reduce electricity costs by deceiving the model into classifying a fraudulent customer as legitimate.
In this paper, four experiments are conducted. First, we evaluate the performance of a Convolutional Neural Network with AdaBoost (CNN-AdaBoost) ETD system. Second, we design an evasion attack to assess the model's performance under attack; the attack comprises two methods: a novel Adversarial Data Generation Method (ADGM), an algorithm we propose to generate adversarial data, and the Fast Gradient Sign Method (FGSM). Third, we test the attack success rate for different percentages of malicious consumers. Finally, the performance of CNN-AdaBoost and other state-of-the-art methods is tested and compared using 10 % and 20 % adversarial data. The proposed attack is validated on the State Grid Corporation of China (SGCC) dataset.
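FGSM, one of the two attack methods above, perturbs an input by a small step in the direction of the sign of the loss gradient. The following is a minimal NumPy sketch of that step; the logistic-regression stand-in model, its weights, and the sample values are illustrative assumptions (the paper attacks a CNN-AdaBoost detector, not this toy model).

```python
import numpy as np

def fgsm_perturb(x, grad, eps=0.1):
    """FGSM: shift each feature by eps in the direction (sign of the
    loss gradient) that increases the detector's loss."""
    return x + eps * np.sign(grad)

# Illustrative stand-in detector: p(theft) = sigmoid(w @ x + b).
w = np.array([0.8, -0.5, 1.2])
b = -0.1

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def loss_grad(x, y):
    """Gradient of the cross-entropy loss w.r.t. the input x."""
    p = sigmoid(w @ x + b)
    return (p - y) * w

# A consumption profile labelled theft (y = 1); the attacker nudges it
# so the detector's theft score drops.
x = np.array([1.0, 0.2, 0.9])
x_adv = fgsm_perturb(x, loss_grad(x, 1.0), eps=0.3)
print(sigmoid(w @ x + b), sigmoid(w @ x_adv + b))
```

With the true label fixed at 1 (theft), the gradient step necessarily lowers the model's theft probability, which is exactly the evasion goal described above.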
The ADGM and FGSM attack models generate adversarial evasion samples by modifying benign samples together with already available malicious data. These samples are first tested against a surrogate model, and only those that successfully deceive the surrogate are forwarded to the target. Under both methods, the overall performance of the CNN-AdaBoost ETD model decreased significantly: accuracy dropped from 96.3 % to 53.61 % for ADGM and to 63.42 % for FGSM, with transferability rates of 95.82 % and 90.68 %, respectively. Our findings reveal that the attack success rate (ASR) of ADGM, 94.11 %, exceeds that of FGSM. We also observe that model accuracy decreases as the proportion of adversarial data increases: the accuracy of CNN-AdaBoost, initially 96.3 %, fell to 85.45 % and 79.43 % with 10 % and 20 % adversarial data, respectively. These adversarial samples are transferable and are useful for designing robust and secure machine learning (ML) models.
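The surrogate-filtering step described above (forward only those adversarial samples that already fool a locally held surrogate) can be sketched as follows; the threshold-based surrogate detector and the sample values are illustrative assumptions, standing in for the trained surrogate used in the paper.

```python
import numpy as np

def surrogate_predicts_benign(sample, threshold=1.0):
    """Illustrative surrogate detector: flags theft when the total
    absolute consumption deviation exceeds a threshold."""
    return np.abs(sample).sum() < threshold

def filter_transferable(adv_samples):
    """Keep only adversarial samples that already evade the surrogate;
    only these are forwarded against the target ETD model."""
    return [s for s in adv_samples if surrogate_predicts_benign(s)]

candidates = [np.array([0.2, 0.3]),   # evades the surrogate
              np.array([1.5, 0.9]),   # still flagged as theft
              np.array([0.1, 0.4])]   # evades the surrogate
kept = filter_transferable(candidates)
print(len(kept))
```

Filtering through a surrogate is what makes the attack black-box: the attacker never queries the target model's gradients, relying instead on the transferability measured in the paper (95.82 % for ADGM, 90.68 % for FGSM).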