{"title":"Adversarial Attacks and Defenses in Multivariate Time-Series Forecasting for Smart and Connected Infrastructures","authors":"Pooja Krishan, Rohan Mohapatra, Saptarshi Sengupta","doi":"arxiv-2408.14875","DOIUrl":null,"url":null,"abstract":"The emergence of deep learning models has revolutionized various industries\nover the last decade, leading to a surge in connected devices and\ninfrastructures. However, these models can be tricked into making incorrect\npredictions with high confidence, leading to disastrous failures and security\nconcerns. To this end, we explore the impact of adversarial attacks on\nmultivariate time-series forecasting and investigate methods to counter them.\nSpecifically, we employ untargeted white-box attacks, namely the Fast Gradient\nSign Method (FGSM) and the Basic Iterative Method (BIM), to poison the inputs\nto the training process, effectively misleading the model. We also illustrate\nthe subtle modifications to the inputs after the attack, which makes detecting\nthe attack using the naked eye quite difficult. Having demonstrated the\nfeasibility of these attacks, we develop robust models through adversarial\ntraining and model hardening. We are among the first to showcase the\ntransferability of these attacks and defenses by extrapolating our work from\nthe benchmark electricity data to a larger, 10-year real-world data used for\npredicting the time-to-failure of hard disks. Our experimental results confirm\nthat the attacks and defenses achieve the desired security thresholds, leading\nto a 72.41% and 94.81% decrease in RMSE for the electricity and hard disk\ndatasets respectively after implementing the adversarial defenses.","PeriodicalId":501291,"journal":{"name":"arXiv - CS - Performance","volume":"43 1","pages":""},"PeriodicalIF":0.0000,"publicationDate":"2024-08-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"arXiv - CS - Performance","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/arxiv-2408.14875","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 0
Abstract
The emergence of deep learning models has revolutionized various industries
over the last decade, leading to a surge in connected devices and
infrastructures. However, these models can be tricked into making incorrect
predictions with high confidence, leading to disastrous failures and security
concerns. To this end, we explore the impact of adversarial attacks on
multivariate time-series forecasting and investigate methods to counter them.
Specifically, we employ untargeted white-box attacks, namely the Fast Gradient
Sign Method (FGSM) and the Basic Iterative Method (BIM), to poison the inputs
to the training process, effectively misleading the model. We also illustrate
the subtle modifications the attack makes to the inputs, which are difficult
to detect with the naked eye. Having demonstrated the
feasibility of these attacks, we develop robust models through adversarial
training and model hardening. We are among the first to showcase the
transferability of these attacks and defenses by extrapolating our work from
the benchmark electricity dataset to a larger, 10-year real-world dataset used for
predicting the time-to-failure of hard disks. Our experimental results confirm
that the attacks and defenses achieve the desired security thresholds, leading
to a 72.41% and 94.81% decrease in RMSE for the electricity and hard disk
datasets respectively after implementing the adversarial defenses.
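
For readers unfamiliar with the two attacks named in the abstract, the sketch below shows how FGSM and BIM perturbations are typically generated against a PyTorch forecaster. The forecaster, loss function, and the epsilon, alpha, and step values are illustrative assumptions, not the configuration reported in the paper.

```python
# Minimal sketch of FGSM and BIM input perturbations (assumed PyTorch setup).
import torch
import torch.nn as nn


def fgsm_perturb(model, x, y, loss_fn, epsilon=0.01):
    """Single-step FGSM: shift each input feature by +/- epsilon along the
    sign of the loss gradient with respect to the input."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = loss_fn(model(x_adv), y)
    loss.backward()
    with torch.no_grad():
        x_adv = x_adv + epsilon * x_adv.grad.sign()
    return x_adv.detach()


def bim_perturb(model, x, y, loss_fn, epsilon=0.01, alpha=0.002, steps=10):
    """BIM: iterated FGSM with a small step size, clipped after every step so
    the total perturbation stays within the epsilon ball around the clean input."""
    x_adv = x.clone().detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = loss_fn(model(x_adv), y)
        loss.backward()
        with torch.no_grad():
            x_adv = x_adv + alpha * x_adv.grad.sign()
            x_adv = torch.max(torch.min(x_adv, x + epsilon), x - epsilon)
        x_adv = x_adv.detach()
    return x_adv


# Hypothetical usage with a regression loss, as in multivariate forecasting:
# model = MyForecaster(); loss_fn = nn.MSELoss()
# x_adv = bim_perturb(model, x_batch, y_batch, loss_fn)
```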
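Along the same lines, a minimal sketch of the adversarial-training defense mentioned in the abstract: each training batch is augmented with attacked copies of the inputs so the model learns to forecast under perturbation. It reuses the hypothetical fgsm_perturb helper from the previous sketch; the equal weighting of clean and adversarial losses and all hyperparameters are assumptions for illustration, not the paper's exact procedure.

```python
# Minimal sketch of adversarial training for a forecaster (assumed setup).
def adversarial_train_epoch(model, loader, optimizer, loss_fn, epsilon=0.01):
    model.train()
    for x, y in loader:
        # Craft adversarial inputs on the fly with the current model parameters.
        x_adv = fgsm_perturb(model, x, y, loss_fn, epsilon)
        optimizer.zero_grad()
        # Train on both clean and perturbed inputs with equal weight.
        loss = 0.5 * (loss_fn(model(x), y) + loss_fn(model(x_adv), y))
        loss.backward()
        optimizer.step()
```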