Si Tian, Yating Gao, Guoquan Yuan, Ru Zhang, Jinmeng Zhao, Song Zhang
Network traffic classification has become an important part of network management: it supports intelligent network operation and maintenance, improves network quality of service (QoS), and helps ensure network security. With the rapid development of applications and protocols, more and more of the traffic on the network is encrypted. Because encryption destroys semantic information, makes content unintelligible, and complicates feature extraction, traditional detection methods are no longer applicable. Existing solutions mainly rely on the feature self-learning ability of end-to-end deep neural networks to identify encrypted traffic. However, such methods depend heavily on dataset size, and experiments show that they often fail to achieve satisfactory results when validated across datasets. To address this problem, this paper proposes an encrypted traffic identification method based on contrastive learning. First, a clustering method is used to expand the labeled dataset. When encrypted traffic features are difficult to extract, it suffices to learn a feature space in which classes are separable, which makes the approach well suited to encrypted traffic identification. When validating across datasets, only fine-tuning on a small amount of labeled data is required to achieve good recognition results, an improvement of about 5% over the end-to-end learning method. CCS CONCEPTS: • Security and privacy • Network security • Security protocols
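The abstract does not reproduce the paper's loss function. As a hedged illustration of the contrastive idea it describes (learning a feature space in which classes separate), a minimal NT-Xent-style objective over two batches of embeddings can be sketched in NumPy; all names here are hypothetical, not the paper's:

```python
import numpy as np

def nt_xent_loss(z1, z2, temperature=0.5):
    """Minimal NT-Xent-style contrastive loss. z1 and z2 are (n, d) batches
    of embeddings; row i of z1 and row i of z2 form a positive pair, and
    every other sample in the combined batch is a negative."""
    n = z1.shape[0]
    z = np.concatenate([z1, z2], axis=0)               # (2n, d)
    z = z / np.linalg.norm(z, axis=1, keepdims=True)   # cosine-similarity space
    sim = z @ z.T / temperature                        # (2n, 2n) similarities
    np.fill_diagonal(sim, -np.inf)                     # exclude self-pairs
    # each sample's positive sits n rows away: i <-> i + n (mod 2n)
    pos = np.concatenate([np.arange(n, 2 * n), np.arange(n)])
    log_prob = sim - np.log(np.exp(sim).sum(axis=1, keepdims=True))
    return -log_prob[np.arange(2 * n), pos].mean()
```

Aligned positive pairs with orthogonal negatives yield a low loss, while misaligned pairs yield a high one, which is the gradient signal that shapes the feature space.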
Title: An encrypted traffic classification method based on contrastive learning. DOI: 10.1145/3571662.3571678. Proceedings of the 8th International Conference on Communication and Information Processing, 2022-11-03.
Teng Wang, Fenglian Li, Xueying Zhang, Lixia Huang, Wenhui Jia
Stroke is an acute cerebrovascular disease with high mortality and disability. Computer-aided diagnosis, using modern medical instruments and machine learning methods, is a necessary measure for improving the efficiency of stroke diagnosis. Electroencephalography (EEG) measures the electrical activity of the brain through electrodes attached to the scalp to detect changes in brain activity. EEG is low-cost, simple to administer, and imposes neither physical harm nor psychological stress on patients. Studies have shown that EEG signals may be useful in diagnosing stroke: with machine learning methods, they can be used to distinguish stroke patients from normal subjects, or to classify stroke subtypes. Stroke is generally divided into two types, ischemic and hemorrhagic, and the main purpose of this paper is to build a prediction model that classifies them from patients' EEG data. In recent years, researchers have developed many techniques for EEG-based stroke classification, applying a variety of machine learning methods to improve prediction accuracy. Typical methods extract time-domain, frequency-domain, or spatial-domain features of the EEG signal before building a classification model. However, the quality of hand-extracted features cannot be guaranteed for patient or subtype classification, and EEG feature extraction is usually computationally expensive. The main goal of this paper is therefore a classification model based on an end-to-end deep neural network that avoids manual feature extraction: a one-dimensional convolutional neural network (1D-CNN) for stroke EEG signals.
The model comprises four convolutional blocks, a global average pooling layer, a dropout layer, and a softmax layer. Each convolutional block consists of two convolutional layers and a pooling layer, which extract features while reducing the number of parameters. A one-dimensional convolution kernel is used to match the one-dimensional time-domain nature of the EEG signal, so the convolutional layers automatically extract the features needed to classify stroke. The experiments use EEG data from clinical stroke patients collected in a hospital neurology department. A Long Short-Term Memory (LSTM) model is used as an end-to-end benchmark for comparison. The experimental results show that the proposed 1D-CNN model performs well, with an accuracy of 90.53%, a precision of 87.90%, a sensitivity of 91.60%, and a specificity of 89.65%, substantially higher than the LSTM model.
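The four reported metrics are standard functions of the binary confusion matrix; a small helper (hypothetical, not from the paper) makes the definitions explicit:

```python
def binary_metrics(tp, fp, tn, fn):
    """Standard binary classification metrics from confusion-matrix counts:
    true/false positives (tp, fp) and true/false negatives (tn, fn)."""
    return {
        "accuracy":    (tp + tn) / (tp + fp + tn + fn),
        "precision":   tp / (tp + fp),
        "sensitivity": tp / (tp + fn),   # a.k.a. recall / true positive rate
        "specificity": tn / (tn + fp),   # true negative rate
    }
```

For example, counts of tp=8, fp=2, tn=9, fn=1 give an accuracy of 0.85 and a precision of 0.80, mirroring how the paper's percentages would be derived from its test-set confusion matrix.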
Title: A 1D-CNN prediction model for stroke classification based on EEG signal. DOI: 10.1145/3571662.3571695.
Popularity prediction of micro-videos is a hotly studied topic due to the widespread use of video-sharing services. It is also a challenging task, because popularity patterns are affected by multiple factors and are hard to model. The goal of this paper is to predict the popularity of online micro-videos using feature extraction techniques and a variational auto-encoder (VAE) framework. First, we identify four declarable modalities that are important for adaptability and extensibility. Then, we design a multi-modal VAE regression model (MASSL) that exploits the internal and external information extracted from heterogeneous features. The model can be applied to large-scale multimedia platforms, even in scenarios where some modalities are absent. Extensive experiments on a dataset originally collected from the most popular video-sharing website in China demonstrate the effectiveness of the proposed model compared with baseline approaches.
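The abstract does not detail its VAE; as a hedged sketch of the two ingredients any VAE regression framework shares, here are the reparameterisation trick and the KL term against a standard normal prior in NumPy (hypothetical helper names, not MASSL itself):

```python
import numpy as np

def reparameterize(mu, log_var, rng):
    """Sample z ~ N(mu, sigma^2) differentiably: z = mu + sigma * eps,
    where eps ~ N(0, I) carries all the randomness."""
    eps = rng.standard_normal(mu.shape)
    return mu + np.exp(0.5 * log_var) * eps

def kl_to_standard_normal(mu, log_var):
    """Closed-form KL( N(mu, sigma^2) || N(0, I) ), summed over latent dims."""
    return 0.5 * np.sum(np.exp(log_var) + mu**2 - 1.0 - log_var)
```

The KL term is exactly zero when the encoder outputs the prior (mu = 0, log_var = 0) and grows as the posterior drifts away, which is what regularises the latent space a VAE-based predictor relies on.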
Title: Multi-modal Variational Auto-Encoder Model for Micro-video Popularity Prediction. Authors: Zhuoran Zhang, Shibiao Xu, Li Guo, Wenke Lian. DOI: 10.1145/3571662.3571664.
Large and diverse datasets are necessary for networks that predict human body parameters and reconstruct 3D body models from images. Because motion capture and body scanning are expensive, high-precision pose and body shape parameters are difficult to obtain, and existing datasets fall short of practical requirements in diversity, size, and accuracy. Inspired by the construction schemes of various datasets, we design and construct a large multi-view 3D human body reconstruction dataset (3DMVHumanBP) with more types of supervised data. By recording different poses of 25 women and 25 men from six viewpoints in a green-screen laboratory, we built a complete multi-view 3D body posture dataset containing 340,000 images. Notably, we propose a body-dimension-prior constrained strategy for constructing the human parametric model, which provides high-precision ground-truth parameters for the SMPL body model. In addition, we design a dense UV data generation method based on body boundaries and mask mapping that provides high-quality dense UV data closely fitting the features of the human images, compensating for the fact that most existing datasets provide only sparse UV data. Experiments verify the effectiveness and advantages of the constructed dataset for network training: mainstream network models trained on our dataset significantly improve in prediction accuracy and robustness, thanks to the many kinds of high-precision human model supervision provided by 3DMVHumanBP. We hope our dataset construction scheme offers ideas for building large-scale, high-precision human body datasets in the future.
Title: Multi-view 3D Human Physique Dataset Construction For Robust Digital Human Modeling of Natural Scenes. Authors: Weitao Lin, Jiguang Zhang, Zhaohui Zhang, Shibiao Xu, Hao Xu, Xiaopeng Zhang. DOI: 10.1145/3571662.3571675.
Cloud data centers in public clouds increasingly deploy complex services composed of sequentially arranged middleboxes that implement advanced functions such as security, auditing, monitoring, and personalized enterprise services. Service Function Chaining (SFC) is a technique that enforces such complex services and differentiated traffic forwarding policies by dynamically steering traffic through an ordered list of service functions. Flow-table-based traffic steering, commonly adopted in SDN-enabled scenarios, consumes too many flow entries and is unsuitable for steering traffic between Virtual Network Functions (VNFs) inside a Virtual Private Cloud (VPC) in large-scale public clouds. Legacy policy-based routing (PBR) schemes, widely used in traditional physical networks, cannot meet the requirements of the fully distributed routing architectures of large-scale public clouds. In this paper, we present a scheme that converges PBR with unsymmetrical Network Address Translation (NAT) to build SFCs in a fully distributed routing architecture. The scheme uses distributed PBR rules to steer traffic through an ordered list of VNFs located on different nodes, while performing NAT on different nodes for the ingress/egress traffic of a specific flow to avoid packet-header asymmetry that could break communication. The proposed scheme adds no data-transmission overhead, eliminates extra configuration on each middlebox of the chain, and scales to large-scale public cloud scenarios.
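As a purely conceptual sketch, not the paper's implementation, and with all names and addresses hypothetical, the core invariant (traffic traverses VNFs in chain order, while address rewriting happens only at the chain edge so intermediate VNFs see consistent headers) can be illustrated as:

```python
def make_vnf(name, log):
    """A pass-through VNF that records the traversal order."""
    def vnf(pkt):
        log.append(name)
        return pkt
    return vnf

def snat(pkt, public_ip):
    """NAT applied once at the chain edge: rewrite the source address."""
    return dict(pkt, src=public_ip)

def steer(pkt, chain, public_ip):
    """Steer a packet through an ordered VNF chain. NAT happens only at
    ingress, so every VNF in the chain observes the same packet headers."""
    pkt = snat(pkt, public_ip)
    for vnf in chain:          # analogous to per-node PBR next-hop rules
        pkt = vnf(pkt)
    return pkt
```

A real deployment distributes the "next hop" decisions as PBR rules on each node rather than iterating a list in one process; the sketch only captures the ordering and header-consistency properties.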
Title: Traffic Steering in Large-scale Public Cloud. Authors: Zhangfeng Hu, Siqing Sun, Ping Yin, Yanjun Li, Qiuzheng Ren, Baozhu Li, Xiong Li. DOI: 10.1145/3571662.3571691.
While reducing costs and improving data security, new-generation information technologies such as blockchain also face problems of operational efficiency and privacy leakage, which have attracted extensive attention from researchers. Digital signatures are one of the key technologies for solving these problems. Group signature algorithms both protect the privacy of the signer's identity and allow effective tracing when disputes occur. The scheme we propose simultaneously addresses the slow signature verification caused by time-consuming bilinear pairing operations in existing group signature algorithms and the signer privacy leakage caused by the vulnerability of a single group administrator to malicious attacks. Compared with the SM2 digital signature algorithm of the Chinese cryptographic standard, the proposed scheme adds signer anonymity while maintaining the same signing and verification efficiency. Compared with Yang et al.'s scheme, the main computational overhead and communication bandwidth of the proposed protocol are significantly reduced. The design is therefore more practical and better suited to scenarios that require both efficiency and strong privacy protection, such as blockchain, anonymous certificates, electronic cash, and electronic voting.
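Neither SM2 nor the paper's group signature is reproduced here. As a hedged, toy-scale illustration of the generic sign/verify structure that discrete-logarithm signature schemes of this family build on, a Schnorr-style signature in a small prime-order subgroup looks like the following (insecure demo parameters; real schemes use ~256-bit group orders):

```python
import hashlib
import secrets

# Toy parameters: G = 4 generates a subgroup of prime order Q = 1013
# in Z_P^* with P = 2027 = 2*Q + 1. For demonstration only.
P, Q, G = 2027, 1013, 4

def h(r, msg):
    """Hash the commitment and message down to a challenge in Z_Q."""
    data = str(r).encode() + msg
    return int.from_bytes(hashlib.sha256(data).digest(), "big") % Q

def keygen():
    x = secrets.randbelow(Q - 1) + 1     # private key
    return x, pow(G, x, P)               # (sk, pk = G^x mod P)

def sign(x, msg):
    k = secrets.randbelow(Q - 1) + 1     # fresh per-signature nonce
    r = pow(G, k, P)                     # commitment
    e = h(r, msg)                        # challenge
    s = (k + x * e) % Q                  # response
    return e, s

def verify(y, msg, sig):
    e, s = sig
    r = (pow(G, s, P) * pow(y, (Q - e) % Q, P)) % P   # G^s * y^(-e) = G^k
    return h(r, msg) == e
```

Group signature schemes wrap this kind of proof-of-knowledge in extra machinery (member certificates, an opening key for the tracing authority), which is where the pairing costs the paper avoids usually come from.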
Title: An Identity-based Group Signature Approach on Decentralized System and Chinese Cryptographic SM2. Authors: Jiaxi Liu, Tianyu Kang, LingNa Guo. DOI: 10.1145/3571662.3571683.
Recognizing communication relationships under non-cooperative conditions is important for understanding the network composition of unknown targets, inferring network topology, and identifying key nodes, and it is a prerequisite for efficient electronic countermeasures. However, because prior knowledge of the target network is difficult to obtain under non-cooperative conditions, recognizing communication relationships is highly challenging. To address this, we construct a system model, analyze the mechanism of wireless communication interaction, extract signal feature series from spectrum monitoring data, and propose a Transformer-based algorithm for recognizing target network communication relationships. Simulation experiments in different scenarios compare the Transformer-based algorithm with four other methods: an SVM, a CNN-based algorithm, a ResNet-based algorithm, and an LSTM-based algorithm. The results demonstrate that the proposed algorithm achieves high recognition accuracy, good anti-interference performance, and robustness.
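The Transformer's core operation applied to such feature series is scaled dot-product attention; a minimal single-head NumPy sketch (illustrative only, not the paper's network) is:

```python
import numpy as np

def softmax(x, axis=-1):
    """Numerically stable softmax."""
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def scaled_dot_product_attention(q, k, v):
    """Attention(Q, K, V) = softmax(Q K^T / sqrt(d_k)) V, the operation a
    Transformer applies to a sequence of feature vectors. q, k, v are
    (T, d_k) matrices for a sequence of length T."""
    d_k = q.shape[-1]
    scores = q @ k.T / np.sqrt(d_k)      # (T, T) pairwise relevance
    weights = softmax(scores, axis=-1)   # each row is a distribution
    return weights @ v, weights
```

Each output position is a weighted mixture of the whole input sequence, which is what lets the model relate signal features that are far apart in time, unlike the local receptive fields of the CNN baselines.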
Title: Recognition of Non-cooperative Radio Communication Relationships Based on Transformer. Authors: Dejun He, Xinrong Wu, Lu Yu, Tianchi Wang. DOI: 10.1145/3571662.3571688.
Chi Jiang, Li Xiao Zhang, Wu Yong Zhao, Jie Shu Lei, Wei Zhi Huang
For the uniform circular array model of a conformal antenna array, we propose a spatial spectrum estimation algorithm for polarization-sensitive arrays based on compensating the spatial-domain manifold matrix. Because a conformal antenna is highly sensitive to the polarization of the incident signal, traditional spatial spectrum direction-finding algorithms are not suitable. Moreover, with the classical polarization-sensitive array spatial spectrum estimation algorithm, interference generated by the anti-radiation detection system under multipath, refraction, and diffraction is introduced directly into the direction-finding model, resulting in large estimation errors for the direction of arrival (DOA) and polarization parameters. Our algorithm compensates the spatial-domain components of the array manifold matrix and combines them with the multiple signal classification (MUSIC) DOA estimation algorithm to construct a four-dimensional polarization-sensitive array spatial spectrum function; a reduced-dimension spectral peak search then estimates the two-dimensional DOA and polarization parameters of the target signal. Compared with the classical polarization-sensitive array MUSIC direction-finding algorithm, the proposed algorithm suppresses the system's front-end error, avoids the mismatch between the spatial-domain components and the theoretical model, and achieves high-precision direction finding and tracking of the target signal.
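The paper's four-dimensional polarization-sensitive MUSIC is not reproduced here; a minimal scalar MUSIC sketch for a uniform linear array (a hypothetical simplification, illustrating only the noise-subspace peak search the method extends) is:

```python
import numpy as np

def music_spectrum(X, n_sources, d=0.5):
    """Scalar MUSIC pseudo-spectrum for an M-element uniform linear array
    with element spacing d (in wavelengths). X is an (M, N) matrix of N
    snapshots; returns (angle grid in degrees, pseudo-spectrum)."""
    M, N = X.shape
    angles = np.linspace(-90.0, 90.0, 181)   # 1-degree search grid
    R = X @ X.conj().T / N                   # sample covariance matrix
    _, vecs = np.linalg.eigh(R)              # eigenvalues in ascending order
    En = vecs[:, :M - n_sources]             # noise-subspace eigenvectors
    p = np.empty(angles.size)
    for i, th in enumerate(np.deg2rad(angles)):
        a = np.exp(-2j * np.pi * d * np.arange(M) * np.sin(th))  # steering vec
        p[i] = 1.0 / np.real(a.conj() @ En @ En.conj().T @ a)
    return angles, p
```

At the true DOA the steering vector is orthogonal to the noise subspace, so the denominator collapses and the pseudo-spectrum peaks; simulating one source at 20 degrees on an 8-element array and taking the argmax recovers that angle.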
Title: Spatial spectrum estimation algorithm of polarization sensitive array based on compensating spatial domain manifold matrix. DOI: 10.1145/3571662.3571684.