{"title":"The Analysis of the Generator Architectures and Loss Functions in Improving the Stability of GANs Training towards Efficient Intrusion Detection","authors":"Raha Soleymanzadeh, R. Kashef","doi":"10.1109/ISCMI56532.2022.10068468","DOIUrl":null,"url":null,"abstract":"Various research studies have been recently introduced in developing generative models, especially in computer vision and image classification. These models are inspired by a generator and discriminator network architecture in a min-max optimization game called Generative Adversarial Networks (GANs). However, GANs-based models suffer from training instability, which means high oscillations during the training, which provides inaccurate results. There are various causes beyond the instability behaviours, such as the adopted generator architecture, loss function, and distance metrics. In this paper, we focus on the impact of the generator architectures and the loss functions on the GANs training. We aim to provide a comparative assessment of various architectures focusing on ensemble and hybrid models and loss functions such as Focal loss, Binary Cross-Entropy and Mean Squared loss function. Experimental results on NSL-KDD and UNSW-NB15 datasets show that the ensemble models are more stable in terms of training and have higher intrusion detection rates. Additionally, the focal loss can improve the performance of detection minority classes. 
Using Mean squared loss improved the detection rate for discriminator, however with the Binary Cross entropy loss function, the deep features representation is improved and there is more stability in trends for all architectures.","PeriodicalId":340397,"journal":{"name":"2022 9th International Conference on Soft Computing & Machine Intelligence (ISCMI)","volume":"162 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2022-11-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"1","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2022 9th International Conference on Soft Computing & Machine Intelligence (ISCMI)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/ISCMI56532.2022.10068468","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 1
Abstract
Various research studies have recently been introduced in developing generative models, especially in computer vision and image classification. These models are inspired by a generator and discriminator network architecture in a min-max optimization game called Generative Adversarial Networks (GANs). However, GAN-based models suffer from training instability, i.e., high oscillations during training that yield inaccurate results. There are various causes behind this instability, such as the adopted generator architecture, loss function, and distance metrics. In this paper, we focus on the impact of the generator architectures and the loss functions on GAN training. We aim to provide a comparative assessment of various architectures, focusing on ensemble and hybrid models, and of loss functions such as Focal loss, Binary Cross-Entropy, and Mean Squared Error loss. Experimental results on the NSL-KDD and UNSW-NB15 datasets show that the ensemble models are more stable during training and have higher intrusion detection rates. Additionally, the Focal loss can improve detection performance for minority classes. Using the Mean Squared Error loss improved the detection rate for the discriminator, whereas the Binary Cross-Entropy loss improved the deep feature representation and yielded more stable trends across all architectures.
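As a point of reference for the loss functions the abstract compares, below is a minimal sketch of binary Focal loss alongside Binary Cross-Entropy. This is the generic textbook definition, not the paper's implementation; the `gamma` and `alpha` defaults are illustrative assumptions. The modulating factor (1 - p_t)^gamma down-weights well-classified (easy, typically majority-class) examples, which is why Focal loss can help with minority-class detection.

```python
import math

def binary_cross_entropy(p: float, y: int) -> float:
    """Standard BCE for a single example: p is the predicted
    probability of the positive class, y is the true label (0 or 1)."""
    p_t = p if y == 1 else 1.0 - p
    return -math.log(p_t)

def binary_focal_loss(p: float, y: int, gamma: float = 2.0, alpha: float = 0.25) -> float:
    """Binary Focal loss: BCE scaled by alpha_t * (1 - p_t)^gamma,
    so confident (easy) predictions contribute far less to the loss."""
    p_t = p if y == 1 else 1.0 - p
    alpha_t = alpha if y == 1 else 1.0 - alpha
    return -alpha_t * (1.0 - p_t) ** gamma * math.log(p_t)

# An easy positive (p = 0.9) is down-weighted much more strongly
# than a hard positive (p = 0.1), relative to plain BCE.
easy_ratio = binary_focal_loss(0.9, 1) / binary_cross_entropy(0.9, 1)
hard_ratio = binary_focal_loss(0.1, 1) / binary_cross_entropy(0.1, 1)
```

In a class-imbalanced intrusion-detection setting, this means the abundant easy negatives contribute little to the gradient, letting the rare attack classes dominate training.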