The improvement of modern communication technology has driven rapid advances in the Internet of Vehicles (IoV) and has promoted progress in many related technologies, such as mobile sensing, vehicular edge computing, sensor networks, satellite positioning, and data analysis. Vehicular edge computing (VEC) is an innovative computing paradigm that can provide flexible and reliable computation services for intelligent and connected vehicles. An optimization problem is formulated to minimize the total task offloading delay by trading off vehicle mobility against task characteristics. To tackle the optimization problem, we propose the Delay-sensitive half-Determined atomic Search algorithm (DeshDaS), in which each intelligent vehicle is regarded as an atom and each strategy as an electron, and the electron transition process is taken into account. Experimental results validate the effectiveness and superiority of our algorithm compared with several existing offloading strategies: the larger the average amount of data waiting to be processed, the more significant our advantage becomes.
{"title":"Dynamic resource allocation on Vehicular edge computing and communication","authors":"Senyu Yu, Yan Guo, Ning Li, Duan Xue, Cuntao Liu","doi":"10.1145/3571662.3571696","DOIUrl":"https://doi.org/10.1145/3571662.3571696","journal":{"name":"Proceedings of the 8th International Conference on Communication and Information Processing"},"publicationDate":"2022-11-03"}
The usage control (UCON) model realizes usage control of resources by integrating authorizations, obligations, and conditions, and by providing decision continuity and attribute mutability. To better adapt to the data-interaction demands of the industrial Internet environment, an enhanced UCON (EN-UCON) model is proposed that extends UCON to maintain persistent control over obligations throughout the resource-usage lifecycle. First, continuous monitoring of obligations is implemented through a post-obligation model. Then, the fulfillment of each obligation is recorded as a trust level, which is incorporated into subsequent authorization decisions as an important factor. Finally, the application of the EN-UCON model in an industrial Internet interaction scenario is illustrated through a specific case.
{"title":"The Enhanced Usage Control for data sharing in Industrial Internet","authors":"Zhong Na, Kai Li, Wei Liu, Zhifeng Gao","doi":"10.1145/3571662.3571689","DOIUrl":"https://doi.org/10.1145/3571662.3571689","journal":{"name":"Proceedings of the 8th International Conference on Communication and Information Processing"},"publicationDate":"2022-11-03"}
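The trust-level feedback loop described in the abstract can be sketched as follows. This is a minimal illustration, not the paper's model: the class names, the [0, 1] trust scale, the update weights, and the 0.5 authorization threshold are all assumed values chosen for the example.

```python
# Illustrative sketch of EN-UCON's post-obligation feedback loop:
# obligation fulfillment updates a trust level, and the trust level
# is factored into subsequent authorization decisions.
# Trust scale [0, 1], weights, and the 0.5 threshold are assumptions.

class Subject:
    def __init__(self, name, trust=0.5):
        self.name = name
        self.trust = trust  # assumed scale: 0.0 (untrusted) .. 1.0 (fully trusted)

def record_obligation(subject, fulfilled, weight=0.1):
    """Post-obligation monitoring: raise trust on fulfillment, lower it otherwise."""
    delta = weight if fulfilled else -2 * weight  # penalize violations more (assumption)
    subject.trust = min(1.0, max(0.0, subject.trust + delta))

def authorize(subject, min_trust=0.5):
    """Authorization decision incorporating the recorded trust level."""
    return subject.trust >= min_trust

alice = Subject("alice", trust=0.9)
assert authorize(alice)
for _ in range(3):
    record_obligation(alice, fulfilled=False)  # repeated violations erode trust
assert not authorize(alice)
```

The key property illustrated is decision continuity: the same subject can lose (or regain) access between requests purely through its obligation history.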
The diffusion function with a large branch number is a fundamental building block in the construction of many block ciphers, used to achieve provable bounds against differential and linear cryptanalysis. Conventional diffusion functions, which are constructed from linear error-correcting codes, have the undesirable side effect that a linear diffusion function by itself is “transparent” (i.e., has transition probability 1) to differential and linear cryptanalysis. Nonlinear diffusion functions have so far received little study in the cryptographic literature. In this paper, we propose a practical criterion for nonlinear optimal diffusion functions. Using this criterion, we construct a general class of nonlinear optimal diffusion functions over finite fields. Unlike previous constructions, our functions are nonlinear, and thus they can provide enhanced protection against differential and linear cryptanalysis.
{"title":"Construction of Nonlinear Optimal Diffusion Functions over Finite Fields","authors":"B. Shen, Yu Zhou","doi":"10.1145/3571662.3571679","DOIUrl":"https://doi.org/10.1145/3571662.3571679","journal":{"name":"Proceedings of the 8th International Conference on Communication and Information Processing"},"publicationDate":"2022-11-03"}
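For readers unfamiliar with the branch-number notion the abstract relies on, here is a brute-force computation for a *linear* map over GF(2)^n (the baseline the paper improves on): the differential branch number is the minimum, over nonzero inputs x, of wt(x) + wt(M·x). The 4×4 matrices below are toy examples, not the paper's constructions.

```python
# Brute-force branch number of a linear map over GF(2)^n:
#   B(M) = min over nonzero x of wt(x) + wt(M.x)
# Toy matrices only; exponential in n, so suitable for small examples.
from itertools import product

def weight(bits):
    """Hamming weight of a bit vector."""
    return sum(bits)

def apply_gf2(M, x):
    """Matrix-vector product over GF(2)."""
    return [sum(m * xi for m, xi in zip(row, x)) % 2 for row in M]

def branch_number(M):
    n = len(M[0])
    return min(weight(x) + weight(apply_gf2(M, x))
               for x in product([0, 1], repeat=n) if any(x))

# Identity maps x to itself, so wt(x) + wt(x) is minimized at 2:
I4 = [[1 if i == j else 0 for j in range(4)] for i in range(4)]
assert branch_number(I4) == 2

# The complement-of-identity matrix mixes better:
M = [[0, 1, 1, 1],
     [1, 0, 1, 1],
     [1, 1, 0, 1],
     [1, 1, 1, 0]]
assert branch_number(M) == 4
```

A diffusion layer is called optimal when its branch number meets the maximum achievable bound for its dimensions; the paper's point is that nonlinear functions can reach optimality without being transparent to differential/linear attacks.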
Lingling Tu, Gaoyan Cai, Bingji Liang, Weining Mao
Aiming at the problem that the load identification accuracy of non-intrusive load monitoring (NILM) is strongly affected by load power and by the number of background loads, a non-intrusive load identification method based on the current complex spectrum and a support vector machine (SVM) is proposed. Through high-frequency sampling of each load's voltage and current, the complex spectrum of the current is extracted with the fast Fourier transform (FFT), and a multi-class SVM load identification model is established and optimized to realize non-intrusive load identification. The algorithm is verified on the PLAID dataset, and its load identification accuracy is compared with SVM classifiers based on total harmonic distortion (THD), harmonic component ratio, and harmonic amplitude. The experimental results show that the proposed method not only improves the identification accuracy for low-power loads, but also achieves higher identification accuracy and better robustness for identifying switching loads in multi-load scenarios.
{"title":"Non-Intrusive Load Identification Based on Complex Spectrum and Support Vector Machine","authors":"Lingling Tu, Gaoyan Cai, Bingji Liang, Weining Mao","doi":"10.1145/3571662.3571665","DOIUrl":"https://doi.org/10.1145/3571662.3571665","journal":{"name":"Proceedings of the 8th International Conference on Communication and Information Processing"},"publicationDate":"2022-11-03"}
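The feature-extraction side of such a pipeline can be sketched as below: sample one mains cycle of current, take its FFT, and keep the complex values at the first few harmonics as the feature vector. The waveforms, sampling rate, and harmonic count are synthetic assumptions, and a nearest-centroid rule stands in for the paper's multi-class SVM purely for illustration.

```python
# Sketch of complex-spectrum feature extraction for load identification.
# Synthetic waveforms; nearest-centroid classification stands in for the SVM.
import numpy as np

FS = 3000       # assumed sampling rate (Hz)
F0 = 50         # mains frequency (Hz)
N = FS // F0    # samples in exactly one mains cycle
t = np.arange(N) / FS

def harmonics(current, k=5):
    """Complex spectrum at harmonics 1..k; with a one-cycle window, bin m is m*F0."""
    spec = np.fft.rfft(current) / N
    return spec[1:k + 1]

def features(current, k=5):
    h = harmonics(current, k)
    return np.concatenate([h.real, h.imag])  # keep magnitude AND phase information

# Two synthetic loads: near-sinusoidal vs. harmonically distorted current.
resistive = np.sin(2 * np.pi * F0 * t)
rectifier = np.sin(2 * np.pi * F0 * t) + 0.6 * np.sin(2 * np.pi * 3 * F0 * t + 0.5)

centroids = {"resistive": features(resistive), "rectifier": features(rectifier)}

def classify(current):
    f = features(current)
    return min(centroids, key=lambda c: np.linalg.norm(f - centroids[c]))

noisy = rectifier + 0.05 * np.random.default_rng(0).standard_normal(N)
assert classify(noisy) == "rectifier"
```

Keeping the complex values (rather than only magnitudes, as in THD-style features) is what preserves phase, which is part of what the paper credits for better low-power-load discrimination.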
Mingkang Yuan, Ye Li, Jiaxi Sun, Baokun Shi, Jinzhong Xu, Lele Xu, Yisu Wang
Remote sensing images are usually distributed across different departments and contain private information, so they normally cannot be made publicly available. However, it is a growing trend to use remote sensing images from different departments jointly, because doing so enables a model to capture more information, and remote sensing image analysis based on deep learning generally requires large amounts of training data. To address this problem, in this paper we apply a distributed asynchronized-discriminator GAN framework (DGAN) to jointly learn from remote sensing images held at different client nodes. The DGAN is composed of multiple distributed discriminators and a central generator, and only the synthetic remote sensing images generated by the DGAN are used to train a semantic segmentation model. Based on the DGAN, we establish an experimental platform composed of multiple hosts, which adopts socket and multi-process techniques to realize asynchronous communication between the hosts and to visualize the training and testing process. During DGAN training, only synthetic images, losses, and labeled images are exchanged between nodes, rather than original remote sensing images or convolutional network model information. Therefore, the DGAN protects the privacy and security of the original remote sensing images well. We verify the performance of the DGAN on three remote sensing image datasets (City-OSM, WHU, and Kaggle Ship). In the experiments, we take different distributions of remote sensing images across client nodes into consideration. The experiments show that the DGAN has a great capacity for distributed remote sensing image learning without sharing the original remote sensing images or the convolutional network model. Moreover, compared with a centralized GAN trained on all remote sensing images collected from all client nodes, the DGAN achieves almost the same performance on semantic segmentation tasks for remote sensing images.
{"title":"Distributed Learning based on Asynchronized Discriminator GAN for remote sensing image segmentation","authors":"Mingkang Yuan, Ye Li, Jiaxi Sun, Baokun Shi, Jinzhong Xu, Lele Xu, Yisu Wang","doi":"10.1145/3571662.3571668","DOIUrl":"https://doi.org/10.1145/3571662.3571668","journal":{"name":"Proceedings of the 8th International Conference on Communication and Information Processing"},"publicationDate":"2022-11-03"}
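The privacy-relevant communication pattern can be sketched as one round of exchange: the central generator broadcasts only synthetic images, and each client returns only a scalar loss computed against data that never leaves the node. The "models" below are placeholder scorers, not real GAN networks, and all class names are invented for the sketch.

```python
# Minimal sketch of one DGAN communication round. Only synthetic images and
# scalar losses cross the (simulated) network; private images stay local.
# Placeholder linear statistics stand in for real generator/discriminator nets.
import numpy as np

rng = np.random.default_rng(0)

class CentralGenerator:
    def synthesize(self, batch=4, shape=(8, 8)):
        return rng.standard_normal((batch, *shape))

class ClientNode:
    def __init__(self, private_images):
        self.private_images = private_images  # never transmitted

    def discriminator_loss(self, synthetic):
        # Placeholder: gap between local-data and synthetic-data statistics
        # (a real client would evaluate its discriminator network here).
        return float(abs(self.private_images.mean() - synthetic.mean()))

generator = CentralGenerator()
clients = [ClientNode(rng.standard_normal((16, 8, 8)) + i) for i in range(3)]

synthetic = generator.synthesize()           # only this is broadcast
losses = [c.discriminator_loss(synthetic) for c in clients]
feedback = float(np.mean(losses))            # aggregate update signal for the generator

assert len(losses) == len(clients)           # exactly one scalar per client
```

In the paper's asynchronous setting the clients would respond at different times over sockets; the invariant illustrated here is simply *what* crosses the wire, not *when*.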
RF stealth waveform design is an essential technology in RF stealth radar, and evaluating the low-probability-of-intercept (LPI) performance of waveforms is becoming increasingly critical. Several radar transmit waveforms are designed through compound modulation, and the relative entropy between the signal and Gaussian white noise is used as an index to evaluate the LPI performance of each waveform. Two further methods, the ambiguity function and the interception factor, are used for comparison and verification. The final simulation realizes a quantitative evaluation of waveform RF stealth performance based on relative entropy.
{"title":"Evaluation of Waveform RF Stealth Performance Based on Relative Entropy","authors":"Min Zhao, Siyu Xu, Bing-Gang Sun","doi":"10.1145/3571662.3571685","DOIUrl":"https://doi.org/10.1145/3571662.3571685","journal":{"name":"Proceedings of the 8th International Conference on Communication and Information Processing"},"publicationDate":"2022-11-03"}
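The relative-entropy index itself is straightforward to estimate: histogram the amplitude samples of the waveform and of Gaussian white noise over a common support, then compute D(P‖Q) = Σ p·log(p/q). The sketch below uses illustrative bin counts and unit-power test signals; it is not the paper's simulation setup.

```python
# Histogram-based estimate of relative entropy D(P || Q) between a waveform's
# amplitude distribution and Gaussian white noise. A waveform whose divergence
# from noise is small is harder to distinguish from the channel background (LPI).
# Bin count and signal parameters are illustrative assumptions.
import numpy as np

def relative_entropy(samples_p, samples_q, bins=30):
    lo = min(samples_p.min(), samples_q.min())
    hi = max(samples_p.max(), samples_q.max())
    p, _ = np.histogram(samples_p, bins=bins, range=(lo, hi))
    q, _ = np.histogram(samples_q, bins=bins, range=(lo, hi))
    p = (p + 1e-12) / p.sum()   # small floor avoids log(0) in empty bins
    q = (q + 1e-12) / q.sum()
    return float(np.sum(p * np.log(p / q)))

rng = np.random.default_rng(1)
n = 20000
t = np.arange(n)

single_tone = np.sqrt(2) * np.sin(0.1 * t)            # unit-power sinusoid
lfm = np.sqrt(2) * np.sin(0.01 * t + 1e-6 * t**2)     # unit-power LFM chirp

# Both indices are non-negative; identical distributions give 0, so smaller
# values mean "closer to noise".
d_tone = relative_entropy(single_tone, rng.standard_normal(n))
d_lfm = relative_entropy(lfm, rng.standard_normal(n))
assert d_tone > 0 and d_lfm > 0
```

This gives the single scalar per waveform that the paper's quantitative comparison needs, complementing the ambiguity-function and interception-factor checks.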
Blockchain technology has developed rapidly in recent years and has been widely used in all walks of life. However, most authentication systems adopted by current blockchain technology are public key infrastructures based on the hardness of large-integer factorization or the discrete logarithm problem, and these cryptosystems are not secure in a quantum environment. Therefore, this paper considers an identity-based post-quantum authentication system applicable to the blockchain, which provides quantum resistance and eliminates the dependence on public key certificates. Under the control of a supervision node, the authentication system also supports key revocation.
{"title":"Post quantum identity authentication mechanism in blockchain","authors":"Peng Duan, Bo Zhou","doi":"10.1145/3571662.3571682","DOIUrl":"https://doi.org/10.1145/3571662.3571682","journal":{"name":"Proceedings of the 8th International Conference on Communication and Information Processing"},"publicationDate":"2022-11-03"}
Li-wei Guo, Xinglin Shen, Shanzhu Xiao, Huanzhang Lu
In target tracking under a low signal-to-noise ratio (SNR), poor target information and heavy clutter limit tracking performance. Extended targets potentially generate more than one measurement per time step, so multiple extended-target tracking can be used to improve performance at low SNR, thanks to the richer data available compared with point-target tracking. Based on the classical probability hypothesis density (PHD) filter, the extended-target PHD (ET-PHD) filter was proposed to track multiple extended targets. The main contribution of this paper is an improvement of the classical extended-target Gaussian-mixture PHD (ET-GM-PHD) filter: a method based on the ET-GM-PHD filter is proposed to decrease false alarms and improve measurement-set partitioning performance in low-SNR cases. The optimized method shows better tracking performance in estimating the number and states of targets compared with a point-target PHD filter.
{"title":"Optimization Tracking Algorithm Based on Extended Target Gaussian Mixture PHD Filter","authors":"Li-wei Guo, Xinglin Shen, Shanzhu Xiao, Huanzhang Lu","doi":"10.1145/3571662.3571687","DOIUrl":"https://doi.org/10.1145/3571662.3571687","journal":{"name":"Proceedings of the 8th International Conference on Communication and Information Processing"},"publicationDate":"2022-11-03"}
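The measurement-set partitioning step that the abstract targets is commonly done by distance partitioning: measurements closer than a gating threshold are grouped into one cell, so each cell ideally collects the multiple returns of a single extended target. The sketch below shows that standard step with invented 2-D measurements and an assumed threshold; it is not the paper's improved partitioning.

```python
# Distance partitioning for extended-target filters: connected components
# under the relation dist(z_i, z_j) < threshold. Toy 2-D measurements.
import numpy as np

def distance_partition(measurements, threshold):
    """Group measurement indices into cells via union-find on the gating graph."""
    n = len(measurements)
    labels = list(range(n))

    def find(i):
        while labels[i] != i:          # path-halving union-find
            labels[i] = labels[labels[i]]
            i = labels[i]
        return i

    for i in range(n):
        for j in range(i + 1, n):
            if np.linalg.norm(measurements[i] - measurements[j]) < threshold:
                labels[find(i)] = find(j)

    cells = {}
    for i in range(n):
        cells.setdefault(find(i), []).append(i)
    return list(cells.values())

# Two extended targets producing three nearby returns each, plus one clutter point.
Z = np.array([[0.0, 0.0], [0.3, 0.1], [0.1, 0.4],   # target A
              [5.0, 5.0], [5.2, 4.9], [4.8, 5.1],   # target B
              [10.0, 0.0]])                          # clutter
cells = distance_partition(Z, threshold=1.0)
assert sorted(len(c) for c in cells) == [1, 3, 3]
```

At low SNR the threshold choice is what breaks down (clutter merges into target cells or targets fragment), which is why partition quality is a natural lever for reducing false alarms.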
Teng Wang, Fenglian Li, Xueying Zhang, Lixia Huang, Wenhui Jia
Stroke is an acute cerebrovascular disease with high mortality and disability. Computer-aided interventional diagnosis, using modern medical instruments and machine learning methods, is a necessary measure for improving the efficiency of stroke diagnosis. The electroencephalogram (EEG) is a diagnostic test that measures the electrical activity of the brain through electrodes attached to the scalp in order to find changes in brain activity. EEG detection has the advantages of low cost, being simple and easy to implement, and imposing no physical harm or psychological stress on patients. Studies have shown that EEG signals might be useful in diagnosing stroke: using machine learning methods, EEG signals can be used to distinguish stroke patients from normal subjects, or to classify stroke subtypes. Stroke is generally divided into two types, ischemic and hemorrhagic, and the main purpose of this paper is to classify ischemic and hemorrhagic strokes by constructing a prediction model from stroke patients' EEG data. In recent years, researchers have developed many techniques for EEG-based stroke classification, using a variety of machine learning methods to improve prediction accuracy. Typical methods extract time-domain, frequency-domain, or spatial-domain features of the EEG signals before establishing a stroke classification model. However, the quality of the extracted features cannot be guaranteed for stroke patient or subtype classification, and EEG feature extraction is usually computationally expensive. The main goal of this paper is therefore to propose a novel classification model using an end-to-end deep neural network that avoids manual feature extraction: a one-dimensional convolutional neural network (1D-CNN) classification model based on the stroke EEG signal.
The model includes four convolutional blocks, a global average pooling layer, a dropout layer, and a softmax layer. Each convolutional block consists of two convolution layers and a pooling layer for extracting features and reducing the number of parameters. A one-dimensional convolution kernel is used to match the characteristics of the one-dimensional EEG time-domain signal, so the model can automatically extract features of the stroke EEG signal for classification through its convolutional layers. The EEG data of clinical stroke patients, collected from the neurology department of a hospital, are used in the experiments. A Long Short-Term Memory (LSTM) model is also used as an end-to-end benchmark to verify the proposed model's performance. The experimental results show that the proposed 1D-CNN prediction model performs well, with an accuracy of 90.53%, a precision of 87.90%, a sensitivity of 91.60%, and a specificity of 89.65%, much higher than the results of the LSTM model.
{"title":"A 1D-CNN prediction model for stroke classification based on EEG signal","authors":"Teng Wang, Fenglian Li, Xueying Zhang, Lixia Huang, Wenhui Jia","doi":"10.1145/3571662.3571695","DOIUrl":"https://doi.org/10.1145/3571662.3571695","journal":{"name":"Proceedings of the 8th International Conference on Communication and Information Processing"},"publicationDate":"2022-11-03"}
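The four reported figures (accuracy, precision, sensitivity, specificity) follow the standard confusion-matrix definitions for a binary task such as ischemic vs. hemorrhagic. The sketch below shows those definitions; the counts fed in are made-up numbers for illustration, not the paper's data.

```python
# Standard binary classification metrics from confusion-matrix counts.
# tp/fp/tn/fn values below are invented for illustration only.
def binary_metrics(tp, fp, tn, fn):
    return {
        "accuracy":    (tp + tn) / (tp + fp + tn + fn),
        "precision":   tp / (tp + fp),
        "sensitivity": tp / (tp + fn),   # a.k.a. recall / true-positive rate
        "specificity": tn / (tn + fp),   # true-negative rate
    }

m = binary_metrics(tp=109, fp=15, tn=130, fn=10)
assert abs(m["accuracy"] - 239 / 264) < 1e-12
assert abs(m["sensitivity"] - 109 / 119) < 1e-12
```

Reporting sensitivity and specificity separately matters clinically here: a model could reach high accuracy while systematically misclassifying the rarer stroke type.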
Si Tian, Yating Gao, Guoquan Yuan, Ru Zhang, Jinmeng Zhao, Song Zhang
Network traffic classification has become an important part of network management: it helps realize intelligent network operation and maintenance, improve network quality of service (QoS), and ensure network security. With the rapid development of applications and protocols, more and more encrypted traffic appears in the network. Because traffic encryption destroys semantic information, makes content unintelligible, and complicates feature extraction, traditional detection methods are no longer applicable. Existing solutions mainly rely on the powerful feature self-learning ability of end-to-end deep neural networks to identify encrypted traffic. However, such methods are overly dependent on data size, and experiments have shown that they often struggle to achieve satisfactory results when validated across datasets. To solve this problem, this paper proposes an encrypted traffic identification method based on contrastive learning. First, a clustering method is used to expand the labeled dataset. When encrypted traffic features are difficult to extract, it is only necessary to learn a feature space in which classes can be discriminated, which makes the approach better suited to encrypted traffic identification. When validating across datasets, only fine-tuning on a small amount of labeled data is required to achieve good recognition results. Compared with the end-to-end learning method, this yields an improvement of about 5%.
CCS CONCEPTS: • Security and privacy • Network security • Security protocols
{"title":"An encrypted traffic classification method based on contrastive learning","authors":"Si Tian, Yating Gao, Guoquan Yuan, Ru Zhang, Jinmeng Zhao, Song Zhang","doi":"10.1145/3571662.3571678","DOIUrl":"https://doi.org/10.1145/3571662.3571678","journal":{"name":"Proceedings of the 8th International Conference on Communication and Information Processing"},"publicationDate":"2022-11-03"}