{"title":"25th IEEE International Conference on Computational Science and Engineering, CSE 2022, Wuhan, China, December 9-11, 2022","authors":"","doi":"10.1109/CSE57773.2022","DOIUrl":"https://doi.org/10.1109/CSE57773.2022","url":null,"abstract":"","PeriodicalId":6838,"journal":{"name":"2021 IEEE 24th International Conference on Computational Science and Engineering (CSE)","volume":"18 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2022-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"91397089","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2021-10-01 | DOI: 10.1109/CSE53436.2021.00031
Pucha Rohan, Priyanka Singh, M. Mohanty
Photo Response Non-Uniformity (PRNU) has been used reliably in digital forensics to identify the source camera in multiple applications. Given its importance, we study in this paper how different environmental conditions affect this unique camera property. We collected 18 different cameras and created a dataset by taking photos of 10 different objects under varied environmental conditions. Specifically, we photographed objects in direct sunlight, with water droplets on the camera lens, submerged in water, and with dust on the camera lens, in addition to taking normal images of the objects in a closed room. To compute the PRNU fingerprint of each of the 18 cameras, we took roughly 30 to 50 images of plain surfaces. We then analyzed the behavior of these cameras, using the computed PRNU as the baseline, and used the Peak to Correlation Energy (PCE) ratio to evaluate camera matches. We present the experimental results and the possible causes of PRNU failure for specific cameras under varied environmental conditions.
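The PCE matching step described above can be sketched as follows. This is an illustrative implementation, not the authors' code; the 3x3 mean-filter "denoiser" and the peak-exclusion window size are assumptions standing in for the filters actually used in PRNU work:

```python
import numpy as np

def noise_residual(img):
    """Crude noise residual: image minus a 3x3 local mean (a stand-in for a real denoising filter)."""
    p = np.pad(img.astype(float), 1, mode="edge")
    local_mean = sum(p[i:i + img.shape[0], j:j + img.shape[1]]
                     for i in range(3) for j in range(3)) / 9.0
    return img - local_mean

def pce(fingerprint, residual, peak_radius=2):
    """Peak-to-Correlation Energy of the circular cross-correlation between a
    camera fingerprint and a test residual; large values indicate a match."""
    corr = np.real(np.fft.ifft2(np.fft.fft2(fingerprint) * np.conj(np.fft.fft2(residual))))
    py, px = np.unravel_index(np.argmax(np.abs(corr)), corr.shape)
    peak = corr[py, px]
    mask = np.ones(corr.shape, dtype=bool)  # exclude a small window around the peak
    mask[max(0, py - peak_radius):py + peak_radius + 1,
         max(0, px - peak_radius):px + peak_radius + 1] = False
    return peak ** 2 / np.mean(corr[mask] ** 2)
```

A fingerprint would typically be estimated by averaging the residuals of the 30-50 flat-surface images; a residual from the same sensor should then score a far larger PCE than one from any other camera.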
{"title":"Effect of Environmental Conditions on PRNU","authors":"Pucha Rohan, Priyanka Singh, M. Mohanty","doi":"10.1109/CSE53436.2021.00031","DOIUrl":"https://doi.org/10.1109/CSE53436.2021.00031","url":null,"abstract":"Photo Response Non-Uniformity (PRNU) has been used reliably in the field of digital forensics to identify the camera for multiple applications. Given its importance, we study how the different environmental conditions affect this unique camera property in this paper. We collected 18 different cameras and created a dataset by clicking photos of 10 different objects in the varied environmental conditions. To be specific, we clicked photos of objects in the sun, putting water droplets on the camera lens, placing objects inside water and, putting dust on the camera lens, apart from clicking the normal images of the objects in a closed room. To compute the PRNU of each of these 18 cameras, we clicked nearly 30 to 50 images of the plain surfaces. We then analyzed the behavior of these cameras, considering the computed PRNU as the baseline. We used the Peak to Correlation Energy (PCE) to evaluate a match for the camera. Here, we present the experimental results and the possible causes of failure for the PRNU for the specific cameras in varied environmental conditions.","PeriodicalId":6838,"journal":{"name":"2021 IEEE 24th International Conference on Computational Science and Engineering (CSE)","volume":"49 1","pages":"154-161"},"PeriodicalIF":0.0,"publicationDate":"2021-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"75271465","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2021-10-01 | DOI: 10.1109/CSE53436.2021.00030
Tianyi Wang, Shengzhi Qin, Kam-pui Chow
The growing number of malicious cyberattacks has drawn public attention to cybersecurity and vulnerabilities. Common Vulnerabilities and Exposures (CVE), a well-known cybersecurity vulnerability database, is often used as a standard reference in the cybersecurity domain for both research and commercial purposes. Over the past decade, the development of the Common Weakness Enumeration (CWE) has provided a useful vulnerability taxonomy for CVE entries. However, CWE categories are assigned entirely by hand, leaving cybersecurity professionals to wait an unpredictable length of time for up-to-date information to be published. In this study, we introduce a new CWE-based vulnerability type classification method built on the CVE dataset. Our method adopts a transformer encoder-decoder architecture and relies on a pure self-attention mechanism, without any convolutions or recurrence. We first encode the CVE input entries to learn representative features and then decode them to classify vulnerability types according to the CWE standard. A fine-tuned, deep pre-trained Bidirectional Encoder Representations from Transformers (BERT) model performs automatic vulnerability type classification on unlabeled CVE candidates and assigns CWE IDs. The proposed method outperforms all classical Natural Language Processing (NLP) baseline algorithms, achieving an accuracy of 90.74% on the test set. In addition, we believe the trained model can achieve considerable correctness at an industry level when applied to real-life cyber threat intelligence articles and reports.
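The pure self-attention building block the method relies on is standard scaled dot-product attention, in which every token of a CVE description attends to every other token with no convolutions or recurrence. A minimal NumPy sketch of that mechanism (illustrative only, not the paper's BERT pipeline):

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(X, Wq, Wk, Wv):
    """Scaled dot-product self-attention over token embeddings X of shape (n, d)."""
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    scores = Q @ K.T / np.sqrt(K.shape[-1])  # (n, n) pairwise attention logits
    return softmax(scores) @ V               # each output is a mixture of all positions
```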
{"title":"Towards Vulnerability Types Classification Using Pure Self-Attention: A Common Weakness Enumeration Based Approach","authors":"Tianyi Wang, Shengzhi Qin, Kam-pui Chow","doi":"10.1109/CSE53436.2021.00030","DOIUrl":"https://doi.org/10.1109/CSE53436.2021.00030","url":null,"abstract":"The wake of increasing malicious cyberattack cases has aroused people’s attention on cybersecurity and vulnerabilities. Common Vulnerabilities and Exposures (CVE), a famous cybersecurity vulnerability database, is often referenced as a standard in cybersecurity territory for both research and commercial purposes. In the past decade, the development of Common Weakness Enumeration (CWE) has provided useful vulnerability taxonomy on CVE entities. However, the generation process of CWE categories is totally by manual working, which has made cybersecurity professionals suffer from the unpredictable timing waiting for the up to date information to be published. In this study, a new CWE based vulnerability types classification method is introduced with the adoption of the CVE dataset. Our method adopts transformer encoder-decoder architecture and uses pure self-attention mechanism without any convolutions and recurrences. We first encode the CVE input entries to learn representative features and then decode them to perform vulnerability types classification regarding the CWE standards. Fine-tuned deep pre-trained Bidirectional Encoder Representation from Transformers (BERT) is utilized in experiment and performs automatic vulnerability types classification tasks on unlabeled CVE candidates and assigns CWE IDs. The proposed vulnerability types classification method outperforms all classical Natural Language Processing (NLP) baseline algorithms, conducting a high accuracy of 90.74% on the testing dataset. In addition, the well-trained vulnerability types classification model is believed to achieve considerable correctness at industry level when applied to the real-life cyber threat intelligence related articles and reports.","PeriodicalId":6838,"journal":{"name":"2021 IEEE 24th International Conference on Computational Science and Engineering (CSE)","volume":"8 1","pages":"146-153"},"PeriodicalIF":0.0,"publicationDate":"2021-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"84849491","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2021-10-01 | DOI: 10.1109/CSE53436.2021.00016
Haotian Miao, Yifei Zhang, Daling Wang, Shi Feng
Many real-world applications could profit from image aesthetic analysis. Jointly understanding the visual content of images and the textual content of user comments and style attributes is richer and more adequate than single-modality, single-dimension information for learning to judge aesthetic quality. In this paper, we propose a multimodal co-transformer model that learns a joint representation of multimodal content based on the co-attention mechanism, and we then conduct multi-dimension aesthetic analysis assisted by style attributes. Toward this goal, we propose a stacked multimodal co-transformer module that encodes features under interactive guidance, and we use a multi-task learning strategy to predict multiple aesthetic dimensions. Experimental results indicate that the proposed model achieves state-of-the-art performance on the AVA dataset benchmark.
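The co-attention mechanism at the heart of such a model lets each modality attend over the other: image tokens query text tokens and vice versa. A minimal sketch of one such step (an assumption about the general mechanism, not the paper's exact module):

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def co_attention(img_feats, txt_feats):
    """One co-attention step between modalities: image tokens (m, d) attend over
    text tokens (n, d) and vice versa, yielding cross-modally mixed features."""
    d = img_feats.shape[-1]
    img_out = softmax(img_feats @ txt_feats.T / np.sqrt(d)) @ txt_feats
    txt_out = softmax(txt_feats @ img_feats.T / np.sqrt(d)) @ img_feats
    return img_out, txt_out
```

Stacking several such steps, as the abstract describes, lets the two streams refine each other under "interactive guidance".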
{"title":"Multimodal Aesthetic Analysis Assisted by Styles through a Multimodal co-Transformer Model","authors":"Haotian Miao, Yifei Zhang, Daling Wang, Shi Feng","doi":"10.1109/CSE53436.2021.00016","DOIUrl":"https://doi.org/10.1109/CSE53436.2021.00016","url":null,"abstract":"Many real-world applications could profit from the ability of image aesthetic analysis. A simultaneous understanding of both the visual content of images and the textual content of user comments and style attributes appears to be more vivid and adequate than single-modality and single-dimension information to help people learning to identify beauty or not. In this paper, we propose a multimodal co-transformer model to learn a joint representation of multimodal contents based on the co-attention mechanism, and then we conduct multi-dimension aesthetic analysis assisted by style attributes. Towards this goal, we propose a stacked multimodal co-transformer module encoding the feature under interactive guidance, and then we utilize a multi-task learning strategy for predicting multiple aesthetic dimensions. Experimental results indicate that the proposed model achieves state-of-the-art performance on the AVA datasets benchmark.","PeriodicalId":6838,"journal":{"name":"2021 IEEE 24th International Conference on Computational Science and Engineering (CSE)","volume":"52 1","pages":"43-50"},"PeriodicalIF":0.0,"publicationDate":"2021-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"90682789","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2021-10-01 | DOI: 10.1109/CSE53436.2021.00015
Xiaolin Zhou, Xiaojie Liu, Xingwei Wang, Shiguang Wu, Mingyang Sun
Multi-robot coverage path planning (CPP) is the design of an optimal motion sequence that makes robots cover every position of the work area except the obstacles. In this article, the communication capability of the multi-robot system is exploited, and a multi-robot CPP mechanism is proposed to control the robots to perform CPP tasks in an unknown environment. Within this mechanism, an algorithm based on deep reinforcement learning is proposed that generates the next action for the robots in real time according to their current state. In addition, a real-time multi-robot obstacle avoidance scheme is proposed based on the robots' information interaction capability. Experimental results show that the method plans an optimal path for multiple robots to complete the coverage task in an unknown environment. Moreover, compared with other reinforcement learning methods, the proposed algorithm learns efficiently, with fast convergence and good stability.
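The reward structure behind reinforcement-learning-based coverage (reward entering uncovered cells, penalize revisits, terminate when everything is covered) can be illustrated with a toy single-robot tabular Q-learner on a tiny obstacle-free grid. This is a didactic sketch under those assumptions, not the paper's multi-robot deep RL algorithm:

```python
import random

MOVES = [(-1, 0), (1, 0), (0, -1), (0, 1)]  # up, down, left, right

def train_coverage_policy(rows=2, cols=2, episodes=3000,
                          alpha=0.5, gamma=0.9, eps=0.2, seed=0):
    """Tabular Q-learning for single-robot coverage of a tiny grid.
    State = (cell, frozenset of covered cells); entering a new cell earns +1."""
    rng = random.Random(seed)
    Q = {}
    q = lambda s, a: Q.get((s, a), 0.0)
    for _ in range(episodes):
        pos, covered = (0, 0), frozenset({(0, 0)})
        for _ in range(4 * rows * cols):
            s = (pos, covered)
            a = rng.randrange(4) if rng.random() < eps else max(range(4), key=lambda k: q(s, k))
            r, c = pos[0] + MOVES[a][0], pos[1] + MOVES[a][1]
            nxt = (r, c) if 0 <= r < rows and 0 <= c < cols else pos
            reward = 1.0 if nxt not in covered else -0.1  # reward new coverage, punish revisits
            covered2 = covered | {nxt}
            done = len(covered2) == rows * cols
            target = reward + (0.0 if done else gamma * max(q((nxt, covered2), k) for k in range(4)))
            Q[(s, a)] = q(s, a) + alpha * (target - q(s, a))
            pos, covered = nxt, covered2
            if done:
                break
    return Q
```

After training, greedily following the learned Q-values traces a path that covers all cells; the paper replaces the table with a deep network so the policy generalizes to unknown environments and multiple robots.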
{"title":"Multi-Robot Coverage Path Planning based on Deep Reinforcement Learning","authors":"Xiaolin Zhou, Xiaojie Liu, Xingwei Wang, Shiguang Wu, Mingyang Sun","doi":"10.1109/CSE53436.2021.00015","DOIUrl":"https://doi.org/10.1109/CSE53436.2021.00015","url":null,"abstract":"The multi-robot coverage path planning (CPP) is the design of optimal motion sequence of robots, which can make robots execute the task covering all positions of the work area except the obstacles. In this article, the communication capability of the multi-robot system is applied, and a multi-robot CPP mechanism is proposed to control the robots to perform CPP tasks in an unknown environment. In this mechanism, an algorithm based on deep reinforcement learning is proposed, which can generate the next action for robots in real-time according to the current state of the robots. In addition, a real-time obstacle avoidance scheme for multi-robot is proposed based on the information interaction capability of multi-robot. Experiment results show that the method can plan the optimal path for multi-robot to complete the covering task in an unknown environment. Moreover, compared with other reinforcement learning methods, the algorithm proposed can efficiently learning with fast convergence speed and good stability.","PeriodicalId":6838,"journal":{"name":"2021 IEEE 24th International Conference on Computational Science and Engineering (CSE)","volume":"10 1","pages":"35-42"},"PeriodicalIF":0.0,"publicationDate":"2021-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"84315901","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Human identification based on gait biometrics has become a popular research topic in computer vision and pattern recognition due to its great potential in public security and surveillance systems. However, recognition accuracy can be seriously degraded by the appearance differences caused by view angle variation. To tackle this problem, we propose a method based on a convolutional neural network (CNN) and an attention mechanism to solve the cross-view problem in gait recognition. In the proposed algorithm, we first extract features with a CNN and then apply a Horizontal Splitting operation to obtain feature partitions at different granularities. The attention mechanism then computes attention scores for the input partitions in both the spatial and channel domains, and the resulting group of feature vectors determines the corresponding identity. To verify the effectiveness of the proposed method, experiments are conducted on two popular gait datasets, CASIA-B and OU-ISIR LP. The results show that the proposed model effectively extracts discriminative gait features robust to view angle variation and improves cross-view gait recognition accuracy compared with the state of the art.
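Scoring a feature partition in both the channel and spatial domains can be sketched roughly as below; the pooling and softmax choices here are illustrative assumptions, not the paper's exact attention module:

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def attention_scores(feat):
    """feat: (C, H, W) feature partition. Channel scores come from global average
    pooling over space; spatial scores from the channel-wise mean map."""
    channel = softmax(feat.mean(axis=(1, 2)))                             # (C,)
    spatial = softmax(feat.mean(axis=0).ravel()).reshape(feat.shape[1:])  # (H, W)
    return channel, spatial

def apply_attention(feat):
    """Reweight the partition by its channel and spatial attention scores."""
    channel, spatial = attention_scores(feat)
    return feat * channel[:, None, None] * spatial[None, :, :]
```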
{"title":"Extracting Discriminative Features for Cross-View Gait Recognition Based on the Attention Mechanism","authors":"Ruicheng Sun, Shuo Han, Weihang Peng, Hanxiang Zhuang, Xin Zeng, Xingang Liu","doi":"10.1109/CSE53436.2021.00032","DOIUrl":"https://doi.org/10.1109/CSE53436.2021.00032","url":null,"abstract":"Human identification based on gait biometrics has become a popular research topic of computer vision and pattern recognition due to its great potential in public security and surveillance system. However, the recognition accuracy can be seriously degraded because of the appearance differences caused by view angle variation. To tackle this problem, we propose a method based on convolutional neural network (CNN) and attention mechanism to solve the cross-view problem in gait recognition. In the proposed algorithm, we firstly extract the features based on CNN structure and then the Horizontal Splitting operation is done to obtain the feature partitions in different granularities. After that, the attention mechanism is utilized to calculate the attention scores of the input partitions on both spatial and channel domain and finally the group of feature vectors can be obtained to determine the corresponding identity. In order to verify the effectiveness of the proposed method, the experiments are done based on two popular gait datasets–CASIA-B and OU-ISIR LP. The results show that the proposed model can effectively extract the discriminative gait features robust to view angle variation and improve the crossview gait recognition accuracy compared with the state-of-the-arts.","PeriodicalId":6838,"journal":{"name":"2021 IEEE 24th International Conference on Computational Science and Engineering (CSE)","volume":"46 1","pages":"162-167"},"PeriodicalIF":0.0,"publicationDate":"2021-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"88312872","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2021-10-01 | DOI: 10.1109/CSE53436.2021.00014
Zhenzhou Guo, Ding Feng, Changqing Gong, Han Qi, Na Lin, Xintong Li
Chaos theory is widely used for image encryption owing to significant properties such as unpredictability and sensitivity to initial state. In this paper, we introduce a new 3D Sine-adjusted Logistic hyperchaotic system (3D-SALM), derived from the Logistic and Sine maps. Performance evaluation shows that it performs well in terms of ergodicity and orbit uncertainty. To investigate its applications, we propose a new random DNA coding scheme whose coding rules are determined by 3D-SALM sequences. This paper further introduces a new image encryption scheme (SALM-IES). To enhance the confusion of the cipher-image, the principles of random diffusion and random confusion are applied. Simulation results show that the proposed scheme effectively resists various typical attacks, in particular differential and cropping attacks.
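The 3D-SALM map itself is not specified in the abstract, so as an illustration of the general idea, here is a common 1D way of blending the Logistic and Sine seed maps, plus a chaotically selected DNA coding rule. Both the combined map and the byte-to-base encoding are assumptions for illustration, not the paper's scheme:

```python
import math

def logistic_sine_sequence(x0, r, n):
    """Iterate a combined Logistic-Sine map,
    x -> (r*x*(1-x) + (4-r)*sin(pi*x)/4) mod 1,
    one standard way to blend the two seed maps (3D-SALM is a 3D variant)."""
    x, seq = x0, []
    for _ in range(n):
        x = (r * x * (1.0 - x) + (4.0 - r) * math.sin(math.pi * x) / 4.0) % 1.0
        seq.append(x)
    return seq

def dna_encode(byte, rule):
    """Encode one byte as 4 DNA bases; `rule` is a permutation of 'ACGT'
    that would be selected per pixel from a chaotic value, giving the
    'random coding rules' the scheme relies on."""
    return ''.join(rule[(byte >> shift) & 0b11] for shift in (6, 4, 2, 0))
```

Two trajectories started from initial states differing by 1e-9 decorrelate after a few dozen iterations, which is exactly the initial-state sensitivity that makes such maps useful as keystream generators.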
{"title":"A new image encryption scheme based on 3D Sine-adjusted-Logistic map and DNA coding","authors":"Zhenzhou Guo, Ding Feng, Changqing Gong, Han Qi, Na Lin, Xintong Li","doi":"10.1109/CSE53436.2021.00014","DOIUrl":"https://doi.org/10.1109/CSE53436.2021.00014","url":null,"abstract":"The chaos theory is a widely used technology for image encryption as its significant properties such as unpredictability and initial state sensitivity. In this paper, we introduce a new 3D Sine adjusted Logistic hyperchaotic system (3D-SALM), which is derived from the Logistic and Sine maps. Performance evaluation shows that it has good performance in ergodicity and orbit uncertainty. To investigate its applications, we propose a new random DNA coding scheme, the random coding rules are according to 3D-SALM sequences. This paper further introduces a new image encryption scheme (SALM-IES). In order to enhance the confusion of cipher-image, the principle of random diffusion and random confusion are fulfilled. Simulation results show that the proposed scheme can effectively resist various typical attacks, especially in the resistance to differential and cropping attacks.","PeriodicalId":6838,"journal":{"name":"2021 IEEE 24th International Conference on Computational Science and Engineering (CSE)","volume":"56 1","pages":"27-34"},"PeriodicalIF":0.0,"publicationDate":"2021-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"85561842","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2021-10-01 | DOI: 10.1109/CSE53436.2021.00011
Chulan Ren, Ning Wang, Yang Zhang
Hippocampus segmentation in MRI is of great significance for the diagnosis, treatment decisions, and study of neuropsychiatric diseases. Manual segmentation of the hippocampus is very time-consuming and has low repeatability; the development of deep learning has brought great progress in this regard. In this paper, the U-net model is selected to perform automatic segmentation of the hippocampus, and a residual module is added to the U-net segmentation network to speed up convergence. Given the characteristics of the hippocampus in brain MRI images, such as blurry edges, irregular shapes, and small size, the Laplacian operator is used to sharpen the original image and make the details and edges of the brain image clearer. The enhanced images effectively improve the segmentation results. Finally, the Dice coefficient on the test set reached 90.14%. The experimental results show that, with pre-processed images, this segmentation model achieves accurate segmentation of the hippocampus in brain MRI, which can assist doctors in making better diagnoses.
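The Laplacian sharpening pre-processing step works by subtracting an image's second-derivative response from itself, boosting edges. A minimal grayscale sketch (the exact kernel and strength used in the paper are not stated, so these are common defaults):

```python
import numpy as np

def laplacian_sharpen(img, strength=1.0):
    """Sharpen a grayscale image by subtracting its Laplacian response."""
    kernel = np.array([[0,  1, 0],
                       [1, -4, 1],
                       [0,  1, 0]], dtype=float)
    padded = np.pad(img.astype(float), 1, mode="edge")
    lap = sum(kernel[i, j] * padded[i:i + img.shape[0], j:j + img.shape[1]]
              for i in range(3) for j in range(3))
    return np.clip(img - strength * lap, 0, 255)
```

On a step edge, pixels on the bright side are pushed brighter and flat regions are left untouched, which is why edges of small, blurry structures such as the hippocampus become easier to delineate.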
{"title":"Human Brain Hippocampus Segmentation Based on Improved U-net Model","authors":"Chulan Ren, Ning Wang, Yang Zhang","doi":"10.1109/CSE53436.2021.00011","DOIUrl":"https://doi.org/10.1109/CSE53436.2021.00011","url":null,"abstract":"The hippocampus segmentation in MRI is of great significance for the diagnosis, treatment decision and research of neuropsychiatric diseases. Manual segmentation of the hippocampus is very time-consuming and has low repeatability. With the development of deep learning, great progress has been brought about in this regard. In this paper, the U-net model is selected to realize the automatic segmentation of the hippocampus, and the residual module is added to the U-net segmentation network to speed up the network convergence. Aiming at the characteristics of the hippocampus in the brain MRI image such as blurry edges, irregular shapes, and small size, the Laplacian algorithm is used to sharpen and filter the original image to make the details and edges of the brain image clearer. The enhanced picture can effectively improve the segmentation effect. Finally, the Dice coefficient on the test set reached 90.14%. The experimental results show that the pre-processed images use this segmentation model to achieve accurate segmentation of the hippocampus in the brain MRI, which can assist doctors in better diagnosis.","PeriodicalId":6838,"journal":{"name":"2021 IEEE 24th International Conference on Computational Science and Engineering (CSE)","volume":"26 1","pages":"7-11"},"PeriodicalIF":0.0,"publicationDate":"2021-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"91014366","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2021-10-01 | DOI: 10.1109/CSE53436.2021.00020
Zhaoyang Du, Ganggui Wang, Narisu Cha, Celimuge Wu, T. Yoshinaga, Rui Yin
Vehicular federated learning (FL) systems can be used for various purposes, including traffic monitoring and people-flow control. However, because the learning process involves a large variety of network entities with different characteristics, it is inefficient to establish an end-to-end communication route for each model upload/download. In this paper, we discuss the use of delay tolerant networking (DTN) technology for transmitting FL models in unmanned aerial vehicle (UAV)-empowered vehicular environments, and propose a networking scheme. The proposed scheme uses a fuzzy logic approach to account for the encounter probability, the connectivity between encountering nodes, and the sociability of nodes during packet forwarding. The importance of local model data is also considered in the buffer management of forwarder nodes, ensuring that local models of higher importance are more likely to be delivered to the central server. Using extensive simulations, we evaluate the proposed scheme in terms of its effect on federated learning, packet delivery ratio, networking overhead, and communication latency, comparing it with existing baselines.
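A fuzzy forwarding decision of the kind described combines memberships of the three inputs into one crisp score. The membership function shape and the weights below are illustrative assumptions, not the paper's rule base:

```python
def triangular(x, a, b, c):
    """Triangular fuzzy membership function rising from a, peaking at b, falling to c."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def forwarding_score(encounter_prob, connectivity, sociability):
    """Blend the 'high' memberships of the three inputs (each in [0, 1]) into a
    single forwarding score via weighted-average defuzzification."""
    high = lambda v: triangular(v, 0.3, 1.0, 1.7)  # 'high' membership on [0, 1]
    memberships = [high(encounter_prob), high(connectivity), high(sociability)]
    weights = [0.5, 0.3, 0.2]  # illustrative relative importance of the inputs
    return sum(w * m for w, m in zip(weights, memberships))
```

A candidate forwarder with high encounter probability, connectivity, and sociability then scores well above one that is weak on all three, which is the ranking the DTN router needs.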
{"title":"UAV-empowered Vehicular Networking Scheme for Federated Learning in Delay Tolerant Environments","authors":"Zhaoyang Du, Ganggui Wang, Narisu Cha, Celimuge Wu, T. Yoshinaga, Rui Yin","doi":"10.1109/CSE53436.2021.00020","DOIUrl":"https://doi.org/10.1109/CSE53436.2021.00020","url":null,"abstract":"While vehicular federated learning (FL) systems can be used for various purposes including traffic monitoring and people flow control, since the learning process involves a large variety of network entities that exhibits different characteristics, it is inefficient to establish an end-to-end communication route for each model upload/download. In this paper, we discuss the use of delay tolerant networking (DTN) technology in transmission of FL models for unmanned aerial vehicle (UAV) empowered vehicular environments, and propose a networking scheme. The proposed scheme considers the encounter probability, the connectivity between encounter nodes, and the sociability of nodes in the packet forwarding by using a fuzzy logic approach. The importance of local model data is also considered in the buffer management of forwarder nodes, which ensures that local models with higher importance are more likely to be delivered to the central server. We use extensive simulations to evaluate the proposed scheme in terms of its effect on the federated learning, packet delivery ratio, networking overhead and communication latency by comparing with existing baselines.","PeriodicalId":6838,"journal":{"name":"2021 IEEE 24th International Conference on Computational Science and Engineering (CSE)","volume":"7 1","pages":"72-79"},"PeriodicalIF":0.0,"publicationDate":"2021-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"73027522","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2021-10-01 | DOI: 10.1109/CSE53436.2021.00033
Liam Daly Manocchio, S. Layeghy, Marius Portmann
Generative Adversarial Networks (GANs) are known to be a powerful machine learning tool for realistic data synthesis. In this paper, we explore GANs for the generation of synthetic network flow data (NetFlow), e.g. for the training of Network Intrusion Detection Systems. GANs are known to be prone to modal collapse, a condition where the generated data fails to reflect the diversity (modes) of the training data. We experimentally evaluate the key GAN-based approaches in the literature for the synthetic generation of network flow data, and demonstrate that they indeed suffer from modal collapse. To address this problem, we present FlowGAN, a network flow generation method which mitigates the problem of modal collapse by applying the recently proposed concept of Manifold Guided Generative Adversarial Networks (MGGAN). Our experimental evaluation shows that FlowGAN is able to generate much more realistic network traffic flows compared to the state-of-the-art GAN-based approaches. We quantify this significant improvement of FlowGAN by using the Wasserstein distance between the statistical distribution of key features of the generated flow data, compared with the corresponding distributions in the training data set.
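The Wasserstein-based evaluation compares, per feature, the empirical distribution of generated flows with that of the training flows. For equal-size 1-D samples the Wasserstein-1 distance reduces to the mean absolute difference between sorted values; a minimal sketch of that special case (the paper's exact evaluation pipeline is not specified here):

```python
def wasserstein_1d(a, b):
    """Empirical 1-D Wasserstein-1 distance between two equal-size samples:
    the average absolute difference between sorted values (quantile coupling)."""
    assert len(a) == len(b), "this shortcut assumes equal sample sizes"
    return sum(abs(x - y) for x, y in zip(sorted(a), sorted(b))) / len(a)
```

A generator that suffers mode collapse concentrates its samples on a few values, so its per-feature Wasserstein distance to the diverse training distribution stays large; for unequal sample sizes one would use a general implementation such as `scipy.stats.wasserstein_distance`.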
{"title":"FlowGAN - Synthetic Network Flow Generation using Generative Adversarial Networks","authors":"Liam Daly Manocchio, S. Layeghy, Marius Portmann","doi":"10.1109/CSE53436.2021.00033","DOIUrl":"https://doi.org/10.1109/CSE53436.2021.00033","url":null,"abstract":"Generative Adversarial Networks (GANs) are known to be a powerful machine learning tool for realistic data synthesis. In this paper, we explore GANs for the generation of synthetic network flow data (NetFlow), e.g. for the training of Network Intrusion Detection Systems. GANs are known to be prone to modal collapse, a condition where the generated data fails to reflect the diversity (modes) of the training data. We experimentally evaluate the key GAN-based approaches in the literature for the synthetic generation of network flow data, and demonstrate that they indeed suffer from modal collapse. To address this problem, we present FlowGAN, a network flow generation method which mitigates the problem of modal collapse by applying the recently proposed concept of Manifold Guided Generative Adversarial Networks (MGGAN). Our experimental evaluation shows that FlowGAN is able to generate much more realistic network traffic flows compared to the state-of-the-art GAN-based approaches. We quantify this significant improvement of FlowGAN by using the Wasserstein distance between the statistical distribution of key features of the generated flow data, compared with the corresponding distributions in the training data set.","PeriodicalId":6838,"journal":{"name":"2021 IEEE 24th International Conference on Computational Science and Engineering (CSE)","volume":"39 1","pages":"168-176"},"PeriodicalIF":0.0,"publicationDate":"2021-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"76160709","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}