Towards Adversarially Robust DDoS-Attack Classification
Michael Guarino, Pablo Rivas, C. DeCusatis
2020 11th IEEE Annual Ubiquitous Computing, Electronics & Mobile Communication Conference (UEMCON)
Published: 2020-10-28
DOI: 10.1109/UEMCON51285.2020.9298167
Citations: 3
Abstract
On the frontier of cybersecurity is a class of emergent security threats that learn to find vulnerabilities in machine learning systems. A supervised machine learning classifier learns a mapping from x to y, where x is a vector of input features and y is a vector of associated labels. Neural networks are state-of-the-art performers on most vision, audio, and natural language processing tasks. However, neural networks have been shown to be vulnerable to adversarial perturbations of the input, which cause them to misclassify with high confidence. Adversarial perturbations are small but targeted modifications to the input, often undetectable by the human eye, and they pose a risk to any application that relies on machine learning models. Neural networks have been shown to be able to classify distributed denial of service (DDoS) attacks by learning from a dataset of attack characteristics visualized using three-axis hive plots. In this work we present a novel application of a classifier trained to classify DDoS attacks that is robust to some of the most common known classes of gradient-based and gradient-free adversarial attacks.
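To make the notion of a gradient-based adversarial perturbation concrete, the following is a minimal sketch in the style of the fast gradient sign method (FGSM), one of the best-known gradient-based attacks. It is not the paper's method: the weights, the toy logistic-regression classifier, and the sample input are all hypothetical stand-ins for the hive-plot DDoS features and neural network the paper actually uses.

```python
import numpy as np

def sigmoid(z):
    """Logistic function, mapping a score to a probability."""
    return 1.0 / (1.0 + np.exp(-z))

def predict(w, b, x):
    """Probability that input x belongs to the positive ("attack") class."""
    return sigmoid(np.dot(w, x) + b)

def fgsm_perturb(w, b, x, y, eps):
    """FGSM-style perturbation of the input x.

    For binary cross-entropy loss, the gradient of the loss with respect
    to the input of a logistic-regression model is (p - y) * w; the attack
    steps a distance eps in the sign of that gradient.
    """
    p = predict(w, b, x)
    grad_x = (p - y) * w
    return x + eps * np.sign(grad_x)

# Hypothetical model weights and a correctly classified benign sample (y = 0).
w = np.array([1.5, -2.0, 0.5])
b = 0.1
x = np.array([-1.0, 1.0, 0.2])

p_clean = predict(w, b, x)                      # low: classified benign
x_adv = fgsm_perturb(w, b, x, y=0.0, eps=0.8)   # small, targeted modification
p_adv = predict(w, b, x_adv)                    # attack probability increases
```

Even with a per-feature change of only 0.8, the perturbed input's predicted attack probability rises sharply, illustrating why small, targeted input modifications threaten deployed classifiers and why the robustness evaluated in this paper matters.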