Pub Date: 2024-03-31 | DOI: 10.23919/cje.2022.00.448
Xiao Ma;Dan Li;Liang Wang;Weijia Han;Nan Zhao
With the rapid development of wireless communications, cellular and distributed wireless networks are vulnerable to eavesdropping because of their distributed users and transparent transmissions. However, simply raising the transmit power in a given area to interfere with potential eavesdroppers not only wastes considerable energy but may also suppress regular communication in that area. To this end, we focus on secure communication in multi-hop wireless networks and propose two communicating-while-jamming schemes, for narrowband and broadband point-to-point (P2P) systems respectively, that secure communication in the presence of potential eavesdroppers with the aid of artificial noise transmitted by a chosen cooperative interferer. Furthermore, to achieve end-to-end (E2E) multi-hop secure communication, we devise a secure network topology discovery scheme that constructs a proper network topology with at least one suitable node as the cooperative interferer in each hop, and then propose a secure transmission path planning scheme to find an E2E secure route from source to destination. Experiments on the wireless open-access research platform demonstrate the feasibility of the proposed schemes. Simulation results further validate that the proposed schemes outperform existing methods in both the P2P communication case and the E2E multi-hop network scenario.
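The quantity such communicating-while-jamming schemes trade on is the secrecy rate: the legitimate link's achievable rate minus the eavesdropper's, where the cooperative interferer's artificial noise degrades the eavesdropper far more than the intended receiver. A minimal sketch under standard Gaussian-channel assumptions (all function names and values here are illustrative, not drawn from the paper):

```python
import math

def secrecy_rate(p_tx, g_main, g_eve, p_jam, g_jam_rx, g_jam_eve, noise=1e-9):
    """Secrecy rate (bit/s/Hz) of a P2P link with a cooperative jammer.

    p_tx / p_jam         : transmit powers of the source and the jammer (W)
    g_main / g_eve       : channel gains source->receiver and source->eavesdropper
    g_jam_rx / g_jam_eve : channel gains jammer->receiver and jammer->eavesdropper
    """
    sinr_rx  = p_tx * g_main / (noise + p_jam * g_jam_rx)   # receiver SINR
    sinr_eve = p_tx * g_eve  / (noise + p_jam * g_jam_eve)  # eavesdropper SINR
    return max(0.0, math.log2(1.0 + sinr_rx) - math.log2(1.0 + sinr_eve))

# A well-placed jammer hurts the eavesdropper far more than the receiver:
print(secrecy_rate(p_tx=1.0, g_main=1e-6, g_eve=8e-7,
                   p_jam=0.5, g_jam_rx=1e-9, g_jam_eve=5e-7))
```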
{"title":"A Secure Communicating While Jamming Approach for End-to-End Multi-Hop Wireless Communication Network","authors":"Xiao Ma;Dan Li;Liang Wang;Weijia Han;Nan Zhao","doi":"10.23919/cje.2022.00.448","DOIUrl":"https://doi.org/10.23919/cje.2022.00.448","url":null,"abstract":"With the rapid development of wireless communications, cellular communication and distributed wireless network are fragile to eavesdropping due to distributed users and transparent communication. However, to adopt bigger transmit power at a given area to interfere potential eavesdroppers not only incurs huge energy waste but also may suppresses regular communication in this area. To this end, we focus on secure communication in multi-hop wireless communication network, and propose two communicating while jamming schemes for secure communication in presence of potential eavesdroppers for the narrow band and broad band point-to-point (P2P) systems respectively with the aid of artificial noise transmitted by a chosen cooperative interferer. Furthermore, to achieve the end-to-end (E2E) multi-hop secure communication, we devise the secure network topology discovering scheme via constructing a proper network topology with at least one proper node as the cooperative interferer in each hop, and then propose the secure transmission path planning scheme to find an E2E secure transmission route from source to destination, respectively. Experiments on the wireless open-access research platform demonstrate the feasibility of the proposed schemes. Besides, simulations results validate that the proposed schemes can achieve better performance compared with existing methods in both the P2P communication case and E2E multi-hop communication network scenario.","PeriodicalId":50701,"journal":{"name":"Chinese Journal of Electronics","volume":"33 3","pages":"833-846"},"PeriodicalIF":1.2,"publicationDate":"2024-03-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=10543235","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141187444","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2024-03-31 | DOI: 10.23919/cje.2022.00.159
Jingming Xia;Yufeng Liu;Ling Tan
Since the computing capacity and battery energy of an unmanned aerial vehicle (UAV) are constrained, a UAV acting as an aerial user can hardly handle computation-heavy, time-sensitive applications. This paper investigates a cellular-connected multi-UAV network supported by mobile edge computing. Multiple task-carrying UAVs fly from given initial positions to termination positions within a specified time. To handle the large number of tasks carried by the UAVs, we formulate a problem based on the energy cost of all UAVs that determines how many tasks should be offloaded to high-altitude balloons (HABs) for computing, where the UAV-HAB association, the UAV trajectories, and the task splitting are jointly optimized. The formulated problem is nonconvex, so we put forward an efficient iterative algorithm that applies successive convex approximation and block coordinate descent. In each iteration, the UAV-HAB association, task splitting, and UAV trajectories are optimized alternately; in particular, the nonconvex UAV trajectory subproblem is settled by solving an approximate convex problem. Numerical results indicate that the proposed scheme is guaranteed to converge and significantly reduces the total power consumption of all UAVs compared with benchmark schemes.
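The alternating optimization described above can be summarized as a block coordinate descent outer loop: fix two variable blocks, solve the (convexified) subproblem for the third, and repeat until the energy objective stops decreasing. The skeleton below is a sketch only; the three `solve_*` callables are hypothetical placeholders for the paper's convex subproblems (the trajectory one obtained via successive convex approximation):

```python
def bcd_optimize(assoc, split, traj, energy,
                 solve_assoc, solve_split, solve_traj,
                 tol=1e-4, max_iter=50):
    """Block coordinate descent: alternately optimize UAV-HAB association,
    task splitting, and UAV trajectories until the total energy converges."""
    prev = float("inf")
    for _ in range(max_iter):
        assoc = solve_assoc(split, traj)   # subproblem 1: association
        split = solve_split(assoc, traj)   # subproblem 2: task splitting
        traj  = solve_traj(assoc, split)   # subproblem 3: SCA-convexified trajectory
        cur = energy(assoc, split, traj)
        if prev - cur < tol:               # objective is monotone non-increasing
            break
        prev = cur
    return assoc, split, traj, cur
```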
{"title":"Joint Optimization of Trajectory and Task Offloading for Cellular-Connected Multi-UAV Mobile Edge Computing","authors":"Jingming Xia;Yufeng Liu;Ling Tan","doi":"10.23919/cje.2022.00.159","DOIUrl":"https://doi.org/10.23919/cje.2022.00.159","url":null,"abstract":"Since the computing capacity and battery energy of unmanned aerial vehicle (UAV) are constrained, UAV as aerial user is hard to handle the high computational complexity and time-sensitive applications. This paper investigates a cellular-connected multi-UAV network supported by mobile edge computing. Multiple UAVs carrying tasks fly from a given initial position to a termination position within a specified time. To handle the large number of tasks carried by UAVs, we propose a energy cost of all UAVs based problem to determine how many tasks should be offloaded to high-altitude balloons (HABs) for computing, where UAV-HAB association, the trajectory of UAV, and calculation task splitting are jointly optimized. However, the formulated problem has nonconvex structure. Hence, an efficient iterative algorithm by applying successive convex approximation and the block coordinate descent methods is put forward. Specifically, in each iteration, the UAV-HAB association, calculation task splitting, and UAV trajec-tory are alternately optimized. Especially, for the nonconvex UAV trajectory optimization problem, an approximate convex optimization problem is settled. The numerical results indicate that the scheme of this paper proposed is guaranteed to converge and also significantly reduces the entire power consumption of all UAVs compared to the benchmark schemes.","PeriodicalId":50701,"journal":{"name":"Chinese Journal of Electronics","volume":"33 3","pages":"823-832"},"PeriodicalIF":1.2,"publicationDate":"2024-03-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=10543192","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141187417","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2024-03-31 | DOI: 10.23919/cje.2022.00.306
Juanying Xie;Ying Peng;Mingzhao Wang
Head and neck cancer is one of the most common malignancies in the world. We propose SE-nnU-Net, which adapts SE (squeeze-and-excitation) normalization into nnU-Net to segment head and neck tumors in PET/CT images, combining the advantages of SE in capturing features of regions of interest with nnU-Net's ability to configure itself for a specific task. A basic module, referred to as convolution-ReLU-SE, is designed for SE-nnU-Net; in the encoder it is combined with a residual structure, while in the decoder it is used without one. The loss function combines Dice loss and Focal loss. Specific data preprocessing and augmentation techniques are developed, and a specific network architecture is designed. Moreover, a deep supervision mechanism is introduced that computes the loss over the last four decoder layers of SE-nnU-Net. SE-nnU-Net is applied to the HECKTOR 2020 and HECKTOR 2021 challenges with different experimental designs. The experimental results show that SE-nnU-Net obtains 0.745, 0.821, and 0.725 in terms of Dice, Precision, and Recall on HECKTOR 2020, and 0.778 and 3.088 in terms of Dice and median HD95 on HECKTOR 2021. SE-nnU-Net can thus provide auxiliary opinions for doctors' diagnoses when segmenting head and neck tumors.
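A combined Dice-plus-Focal loss of the kind described can be written compactly in PyTorch. This is a minimal sketch for the binary case; the mixing weight `alpha` and focusing parameter `gamma` are illustrative choices, not the paper's reported settings:

```python
import torch
import torch.nn.functional as F

def dice_focal_loss(logits, target, alpha=0.5, gamma=2.0, eps=1e-6):
    """Combined Dice + Focal loss for binary segmentation.

    logits : raw network output, shape (N, 1, H, W)
    target : float ground-truth mask of the same shape, values in {0, 1}
    """
    prob = torch.sigmoid(logits)
    # Dice term: 1 - 2|P∩G| / (|P| + |G|), computed per sample.
    inter = (prob * target).sum(dim=(1, 2, 3))
    denom = prob.sum(dim=(1, 2, 3)) + target.sum(dim=(1, 2, 3))
    dice = 1.0 - (2.0 * inter + eps) / (denom + eps)
    # Focal term: cross-entropy down-weighted on easy pixels by (1 - p_t)^gamma.
    bce = F.binary_cross_entropy_with_logits(logits, target, reduction="none")
    p_t = prob * target + (1.0 - prob) * (1.0 - target)
    focal = ((1.0 - p_t) ** gamma * bce).mean(dim=(1, 2, 3))
    return (alpha * dice + (1.0 - alpha) * focal).mean()

loss = dice_focal_loss(torch.randn(2, 1, 64, 64), torch.ones(2, 1, 64, 64))
```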
{"title":"The Squeeze & Excitation Normalization Based nnU-Net for Segmenting Head & Neck Tumors","authors":"Juanying Xie;Ying Peng;Mingzhao Wang","doi":"10.23919/cje.2022.00.306","DOIUrl":"https://doi.org/10.23919/cje.2022.00.306","url":null,"abstract":"Head and neck cancer is one of the most common malignancies in the world. We propose SE-nnU-Net by adapting SE (squeeze and excitation) normalization into nnU-Net, so as to segment head and neck tumors in PET/CT images by combining advantages of SE capturing features of interest regions and nnU-Net configuring itself for a specific task. The basic module referred to convolution-ReLU-SE is designed for SE-nnU-Net. In the encoder it is combined with residual structure while in the decoder without residual structure. The loss function combines Dice loss and Focal loss. The specific data preprocessing and augmentation techniques are developed, and specific network architecture is designed. Moreover, the deep supervised mechanism is introduced to calculate the loss function using the last four layers of the decoder of SE-nnU-Net. This SE-nnU-net is applied to HECKTOR 2020 and HECKTOR 2021 challenges, respectively, using different experimental design. The experimental results show that SE-nnU-Net for HECKTOR 2020 obtained 0.745, 0.821, and 0.725 in terms of Dice, Precision, and Recall, respectively, while the SE-nnU-Net for HECKTOR 2021 obtains 0.778 and 3.088 in terms of Dice and median HD95, respectively. This SE-nnU-Net for segmenting head and neck tumors can provide auxiliary opinions for doctors' diagnoses.","PeriodicalId":50701,"journal":{"name":"Chinese Journal of Electronics","volume":"33 3","pages":"766-775"},"PeriodicalIF":1.2,"publicationDate":"2024-03-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=10543238","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141187440","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2024-03-31 | DOI: 10.23919/cje.2022.00.419
Yi Zhang;Kai Zhang;Ting Cui
The related-key model is a favourable approach for improving attacks on block ciphers with a simple key schedule. However, to the best of our knowledge, few results exist in which zero-correlation linear attacks take advantage of the related-key model. We ascribe this to the lack of consideration of the key input in zero-correlation linear attacks. Concentrating on block ciphers with a linear key schedule, we generalize the zero-correlation linear attack to the related-key setting. Specifically, we propose generalized linear hulls (GLHs) in which the key input is involved, and we indicate the links between GLHs and conventional linear hulls (CLHs). We then prove that the existence of zero-correlation GLHs is completely determined by the corresponding CLHs and the linear key schedule. In addition, we introduce a method to construct zero-correlation GLHs from CLHs and transform them into an integral distinguisher; its correctness is verified by applying it to SIMON16/16, a SIMON-like toy cipher. Based on our method, we find 12/13/14/15/15/17/20/22-round related-key zero-correlation linear distinguishers of SIMON32/64, SIMON48/72, SIMON48/96, SIMON64/96, SIMON64/128, SIMON96/144, SIMON128/192 and SIMON128/256, respectively. As far as we know, these distinguishers are one, two, or three rounds longer than the current best zero-correlation linear distinguishers of SIMON.
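At the heart of any zero-correlation attack is the requirement that a linear approximation have correlation exactly zero for every key. The toy sketch below shows how such a correlation is computed exhaustively for a small permutation; the 4-bit S-box and masks are hypothetical stand-ins, and the paper's treatment of SIMON rounds and the linear key schedule is far more involved:

```python
def parity(x):
    """Parity of the bits of x (inner product over GF(2))."""
    return bin(x).count("1") & 1

def correlation(cipher, in_mask, out_mask, n_bits):
    """Correlation of the linear approximation
    <in_mask, x> XOR <out_mask, cipher(x)> over all n-bit inputs.
    A zero-correlation distinguisher needs this to be exactly 0
    for every key of the (linear) key schedule."""
    total = 0
    for x in range(1 << n_bits):
        total += 1 if parity(in_mask & x) == parity(out_mask & cipher(x)) else -1
    return total / (1 << n_bits)

# Toy 4-bit permutation standing in for a reduced cipher (illustrative only):
SBOX = [0xE, 0x4, 0xD, 0x1, 0x2, 0xF, 0xB, 0x8,
        0x3, 0xA, 0x6, 0xC, 0x5, 0x9, 0x0, 0x7]
print(correlation(lambda x: SBOX[x], in_mask=0x5, out_mask=0x9, n_bits=4))
```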
{"title":"Related-Key Zero-Correlation Linear Attacks on Block Ciphers with Linear Key Schedules","authors":"Yi Zhang;Kai Zhang;Ting Cui","doi":"10.23919/cje.2022.00.419","DOIUrl":"https://doi.org/10.23919/cje.2022.00.419","url":null,"abstract":"Related-key model is a favourable approach to improve attacks on block ciphers with a simple key schedule. However, to the best of our knowledge, there are a few results in which zero-correlation linear attacks take advantage of the related-key model. We ascribe this phenomenon to the lack of consideration of the key input in zero-correlation linear attacks. Concentrating on the linear key schedule of a block cipher, we generalize the zero-correlation linear attack by using a related-key setting. Specifically, we propose the creation of generalized linear hulls (GLHs) when the key input is involved; moreover, we indicate the links between GLHs and conventional linear hulls (CLHs). Then, we prove that the existence of zero-correlation GLHs is completely determined by the corresponding CLHs and the linear key schedule. In addition, we introduce a method to construct zero-correlation GLHs by CLHs and transform them into an integral distinguisher. The correctness is verified by applying it to SIMON16/16, a SIMON-like toy cipher. Based on our method, we find 12/13/14/15/15/17/20/22-round related-key zero-correlation linear distinguishers of SIMON32/64, SIMON48/72, SIMON48/96, SIMON64/96, SIMON64/128, SIMON96/144, SIMON128/192 and SIMON128/256, respectively. As far as we know, these distinguishers are one, two, or three rounds longer than current best zero-correlation linear distinguishers of SIMON.","PeriodicalId":50701,"journal":{"name":"Chinese Journal of Electronics","volume":"33 3","pages":"672-682"},"PeriodicalIF":1.2,"publicationDate":"2024-03-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=10543213","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141187418","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2024-03-31 | DOI: 10.23919/cje.2022.00.179
Hao Wang;Jinwei Wang;Xuelong Hu;Bingtao Hu;Qilin Yin;Xiangyang Luo;Bin Ma;Jinsheng Sun
Detection of color images that have undergone double compression is a critical aspect of digital image forensics. Although various methods can detect double Joint Photographic Experts Group (JPEG) compression, they cannot address mixed double compression, in which two different compression standards are applied in turn. In particular, when Joint Photographic Experts Group 2000 (JPEG2000) is used as the second compression standard, the performance of existing methods declines or is lost entirely. To tackle this JPEG+JPEG2000 case, a detection method based on quaternion convolutional neural networks (QCNN) is proposed. The QCNN processes the data as quaternions, transforming the components of a traditional convolutional neural network (CNN) into quaternion representations; the relationships between the color channels of the image are thus preserved, and the use of color information is optimized. Additionally, the method includes a feature conversion module that converts the extracted features into quaternion statistical features, thereby amplifying the evidence of double compression. Experimental results indicate that the proposed QCNN-based method improves detection of JPEG+JPEG2000 compression by 27% on average compared with existing methods.
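The ingredient that distinguishes a quaternion CNN from a channel-wise CNN is the Hamilton product, which mixes all four quaternion components (here the real part plus the R, G, B channels) in every multiply, so the color channels are never treated as independent planes. A minimal NumPy sketch of that product (the example values are illustrative):

```python
import numpy as np

def hamilton_product(q1, q2):
    """Hamilton product of two quaternions stored as (r, x, y, z) components;
    this is the mixing rule a quaternion convolution applies between kernel
    and input, coupling the three imaginary parts (e.g. R, G, B channels)."""
    r1, x1, y1, z1 = q1
    r2, x2, y2, z2 = q2
    return np.stack([
        r1*r2 - x1*x2 - y1*y2 - z1*z2,   # real part
        r1*x2 + x1*r2 + y1*z2 - z1*y2,   # i component
        r1*y2 - x1*z2 + y1*r2 + z1*x2,   # j component
        r1*z2 + x1*y2 - y1*x2 + z1*r2,   # k component
    ])

# An RGB pixel as a pure quaternion (real part 0) times a kernel quaternion:
pixel  = np.array([0.0, 0.8, 0.4, 0.1])   # (0, R, G, B)
kernel = np.array([0.5, 0.1, 0.0, 0.2])
print(hamilton_product(pixel, kernel))
```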
{"title":"Detecting Double Mixed Compressed Images Based on Quaternion Convolutional Neural Network","authors":"Hao Wang;Jinwei Wang;Xuelong Hu;Bingtao Hu;Qilin Yin;Xiangyang Luo;Bin Ma;Jinsheng Sun","doi":"10.23919/cje.2022.00.179","DOIUrl":"https://doi.org/10.23919/cje.2022.00.179","url":null,"abstract":"Detection of color images that have undergone double compression is a critical aspect of digital image forensics. Despite the existence of various methods capable of detecting double Joint Photographic Experts Group (JPEG) compression, they are unable to address the issue of mixed double compression resulting from the use of different compression standards. In particular, the implementation of Joint Photographic Experts Group 2000 (JPEG2000) as the secondary compression standard can result in a decline or complete loss of performance in existing methods. To tackle this challenge of JPEG+JPEG2000 compression, a detection method based on quaternion convolutional neural networks (QCNN) is proposed. The QCNN processes the data as a quaternion, transforming the components of a traditional convolutional neural network (CNN) into a quaternion representation. The relationships between the color channels of the image are preserved, and the utilization of color information is optimized. Additionally, the method includes a feature conversion module that converts the extracted features into quaternion statistical features, thereby amplifying the evidence of double compression. Experimental results indicate that the proposed QCNN-based method improves, on average, by 27% compared to existing methods in the detection of JPEG+JPEG2000 compression.","PeriodicalId":50701,"journal":{"name":"Chinese Journal of Electronics","volume":"33 3","pages":"657-671"},"PeriodicalIF":1.2,"publicationDate":"2024-03-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=10543242","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141187346","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2024-03-31 | DOI: 10.23919/cje.2022.00.363
Junfeng Tian;Zhengqi Hou
Most current research on friendship inference in location-based social networks relies on users' co-occurrence characteristics; however, statistics show that co-occurrence is not common among all users. Meanwhile, most existing work focuses on mining more features to improve accuracy while ignoring the time complexity that matters in practical applications. On this basis, we propose a friendship inference model named ITSIC, based on the similarity of user interest trajectories jointly with user location co-occurrence. Using the MeanShift clustering algorithm, ITSIC clusters and filters user check-ins, dividing the dataset into interest, abnormal, and noise check-ins. User interest trajectories are constructed from the interest check-in data, which allows ITSIC to work efficiently even for users without co-occurrences. At the same time, clustering enables the further proposal of single-moment multi-interest trajectories, which enrich the meaning of a trajectory moment. Extensive experiments on two real online social network datasets show that ITSIC outperforms existing methods in both AUC score and time efficiency.
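Clustering check-ins with MeanShift, as the pipeline above does before filtering, requires no preset number of clusters; dense interest areas emerge from the bandwidth alone. A small scikit-learn sketch with synthetic coordinates (the bandwidth value and the reading of isolated points as noise are illustrative assumptions, not the paper's parameters):

```python
import numpy as np
from sklearn.cluster import MeanShift

# Check-in coordinates (lat, lon) for one user -- synthetic example values.
checkins = np.array([
    [39.905, 116.391], [39.906, 116.390], [39.904, 116.392],  # dense interest area
    [39.990, 116.310], [39.991, 116.311],                     # second interest area
    [40.200, 116.800],                                        # isolated check-in
])

# MeanShift discovers the cluster count itself; bandwidth controls locality.
ms = MeanShift(bandwidth=0.05).fit(checkins)
print(ms.labels_)           # cluster id per check-in; singletons can be filtered as noise
print(ms.cluster_centers_)  # centers of the discovered interest areas
```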
{"title":"Friendship Inference Based on Interest Trajectory Similarity and Co-Occurrence","authors":"Junfeng Tian;Zhengqi Hou","doi":"10.23919/cje.2022.00.363","DOIUrl":"https://doi.org/10.23919/cje.2022.00.363","url":null,"abstract":"Most of the current research on user friendship speculation in location-based social networks is based on the co-occurrence characteristics of users, however, statistics find that co-occurrence is not common among all users; meanwhile, most of the existing work focuses on mining more features to improve the accuracy but ignoring the time complexity in practical applications. On this basis, a friendship inference model named ITSIC is proposed based on the similarity of user interest tracks and joint user location co-occurrence. By utilizing MeanShift clustering algorithm, ITSIC clustered and filtered user check-ins and divided the dataset into interesting, abnormal, and noise check-ins. User interest trajectories were constructed from user interest check-in data, which allows ITSIC to work efficiently even for users without co-occurrences. At the same time, by application of clustering, the single-moment multi-interest trajectory was further proposed, which increased the richness of the meaning of the trajectory moment. The extensive experiments on two real online social network datasets show that ITSIC outperforms existing methods in terms of AUC score and time efficiency compared to existing methods.","PeriodicalId":50701,"journal":{"name":"Chinese Journal of Electronics","volume":"33 3","pages":"708-720"},"PeriodicalIF":1.2,"publicationDate":"2024-03-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=10543220","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141187348","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2024-03-31 | DOI: 10.23919/cje.2022.00.210
Rulin Zhang;Ruixue Li;Jiakai Liang;Keqiang Yue;Wenjun Li;Yilin Li
Snoring is a widespread occurrence that degrades human sleep quality, and it is also one of the earliest symptoms of many sleep disorders. Detecting snoring accurately makes further screening and diagnosis of sleep problems easier, yet snoring is frequently ignored because its detection is both underrated and costly. This research therefore offers an alternative snoring detection method based on a long short-term memory based spiking neural network (LSTM-SNN) that is suitable for large-scale in-home snoring detection. We designed acquisition equipment to collect the sleep recordings of 54 subjects and constructed a sleep sound database in the home environment. Mel frequency cepstral coefficients (MFCCs) were extracted from these sound signals and encoded into spike trains by a threshold encoding approach, then classified automatically as non-snoring or snoring sounds by our LSTM-SNN model. Parameter updates were completed with a backpropagation algorithm based on a surrogate gradient in the LSTM-SNN. The classification accuracy reached an impressive 93.4%, accompanied by a remarkable 36.9% reduction in computational cost compared with the regular LSTM model.
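The front end described, MFCC extraction followed by threshold encoding into spike trains, can be sketched in a few lines with librosa. The normalization and threshold below are illustrative assumptions, not the paper's settings:

```python
import numpy as np
import librosa

# One second of synthetic audio standing in for a sleep-sound frame.
sr = 16000
y = np.random.default_rng(0).standard_normal(sr).astype(np.float32)

# 13 MFCCs per frame, a typical front end for snore/non-snore classification.
mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13)

# Threshold encoding: a neuron fires (1) whenever its normalized coefficient
# crosses the threshold, turning each MFCC row into a spike train.
norm = (mfcc - mfcc.mean(axis=1, keepdims=True)) / (mfcc.std(axis=1, keepdims=True) + 1e-8)
spikes = (norm > 0.5).astype(np.uint8)   # threshold 0.5 is illustrative
print(spikes.shape, spikes.mean())
```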
{"title":"Long Short-Term Memory Spiking Neural Networks for Classification of Snoring and Non-Snoring Sound Events","authors":"Rulin Zhang;Ruixue Li;Jiakai Liang;Keqiang Yue;Wenjun Li;Yilin Li","doi":"10.23919/cje.2022.00.210","DOIUrl":"https://doi.org/10.23919/cje.2022.00.210","url":null,"abstract":"Snoring is a widespread occurrence that impacts human sleep quality. It is also one of the earliest symptoms of many sleep disorders. Snoring is accurately detected, making further screening and diagnosis of sleep problems easier. Snoring is frequently ignored because of its underrated and costly detection costs. As a result, this research offered an alternative method for snoring detection based on a long short-term memory based spiking neural network (LSTM-SNN) that is appropriate for large-scale home detection for snoring. We designed acquisition equipment to collect the sleep recordings of 54 subjects and constructed the sleep sound database in the home environment. And Mel frequency cepstral coefficients (MFCCs) were extracted from these sound signals and encoded into spike trains by a threshold encoding approach. They were classified automatically as non-snoring or snoring sounds by our LSTM-SNN model. We used the backpropagation algorithm based on an alternative gradient in the LSTM-SNN to complete the parameter update. The categorization percentage reached an impressive 93.4%, accompanied by a remarkable 36.9% reduction in computer power compared to the regular LSTM model.","PeriodicalId":50701,"journal":{"name":"Chinese Journal of Electronics","volume":"33 3","pages":"793-802"},"PeriodicalIF":1.2,"publicationDate":"2024-03-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=10543234","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141187351","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2024-03-31 | DOI: 10.23919/cje.2022.00.156
Ling Liu;Maoxiang Chu;Rongfen Gong;Liming Liu;Yonghui Yang
Compared with the support vector machine, the large margin distribution machine (LDM) has better generalization performance. The central idea of LDM is to maximize the margin mean and minimize the margin variance simultaneously, but its computational complexity is high. To reduce this complexity, a weighted linear loss LDM (WLLDM) is proposed. The framework of WLLDM is built on LDM, with a weighted linear loss adopted in place of the hinge loss. This modification transforms the quadratic programming problem into a simple linear equation, resulting in lower computational complexity; WLLDM thus has the potential to handle large-scale datasets. WLLDM is similar in principle to LDM, optimizing the margin distribution to achieve better generalization performance. The WLLDM algorithm is compared with other models in experiments on different datasets, and the results show that it achieves better generalization performance and faster training speed.
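The speed-up comes from the loss substitution: with a (weighted) linear loss, the optimality condition becomes a linear system, so training reduces to a single matrix solve instead of quadratic programming. The sketch below mirrors that structural reduction on a regularized linear classifier; it illustrates the idea only and is not WLLDM's exact objective:

```python
import numpy as np

def linear_loss_classifier(X, y, weights, lam=1.0):
    """Train w by solving the linear system induced by a weighted linear
    loss: (X^T D X + lam I) w = X^T D y, with D = diag(weights).
    Contrast with hinge loss, whose optimum requires quadratic programming."""
    n, d = X.shape
    D = np.diag(weights)
    A = X.T @ D @ X + lam * np.eye(d)
    b = X.T @ D @ y
    return np.linalg.solve(A, b)

rng = np.random.default_rng(1)
X = rng.standard_normal((200, 5))
y = np.sign(X @ np.array([1.0, -2.0, 0.5, 0.0, 3.0]) + 0.1 * rng.standard_normal(200))
w = linear_loss_classifier(X, y, weights=np.ones(200))
print((np.sign(X @ w) == y).mean())   # training accuracy of the one-shot solve
```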
{"title":"Weighted Linear Loss Large Margin Distribution Machine for Pattern Classification","authors":"Ling Liu;Maoxiang Chu;Rongfen Gong;Liming Liu;Yonghui Yang","doi":"10.23919/cje.2022.00.156","DOIUrl":"https://doi.org/10.23919/cje.2022.00.156","url":null,"abstract":"Compared with support vector machine, large margin distribution machine (LDM) has better generalization performance. The central idea of LDM is to maximize the margin mean and minimize the margin variance simultaneously. But the computational complexity of LDM is high. In order to reduce the computational complexity of LDM, a weighted linear loss LDM (WLLDM) is proposed. The framework of WLLDM is built based on LDM and the weighted linear loss. The weighted linear loss is adopted instead of the hinge loss in WLLDM. This modification can transform the quadratic programming problem into a simple linear equation, resulting in lower computational complexity. Thus, WLLDM has the potential to deal with large-scale datasets. The WLLDM is similar in principle to the LDM algorithm, which can optimize the margin distribution and achieve better generalization performance. The WLLDM algorithm is compared with other models by conducting experiments on different datasets. The experimental results show that the proposed WLLDM has better generalization performance and faster training speed.","PeriodicalId":50701,"journal":{"name":"Chinese Journal of Electronics","volume":"33 3","pages":"753-765"},"PeriodicalIF":1.2,"publicationDate":"2024-03-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=10543193","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141187373","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2024-03-31 | DOI: 10.23919/cje.2022.00.366
Wangxin Feng;Xiangyang Luo;Tengyao Li;Chunfang Yang
Network flow watermarking (NFW) is usually used for flow correlation: by actively modulating some features of the carrier traffic, NFW can establish the correspondence between different network nodes. Under the strict demands of network traffic tracing, current watermarking methods cannot work efficiently because they depend on specific protocols, require large numbers of packets, and resist network channel interference poorly. To this end, we propose a robust network flow watermarking method based on the IP packet sequence, called IP-Pealing. It uses the packet sequence, together with the IP identification field, as the watermark carrier, which is insensitive to time jitter and suitable for all IP-based traffic. To enhance robustness against packet loss and packet reordering, the detection sequence set is constructed from the variation range of the packet sequence, correcting possible errors caused by network transmission. To improve detection accuracy, the long watermark is divided into several short sequences that are embedded in turn and reassembled during detection. In extensive experiments on the Internet, the overall detection rate and accuracy of IP-Pealing reach 99.91% and 99.42%, respectively. Compared with classical network flow watermarking methods such as PROFW, IBW, ICBW, WBIPD and SBTT, IP-Pealing improves accuracy by 13.70% to 54.00%.
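To make the embed-short-sequences-in-turn idea concrete, the hypothetical sketch below carries one watermark bit in the low bit of each successive IP identification value and recovers the watermark by majority vote, which tolerates some loss and reordering. IP-Pealing's actual modulation of the packet sequence is more elaborate; every name and value here is illustrative, not the paper's scheme:

```python
def embed_watermark(ids, bits):
    """Hypothetical: overwrite the low bit of each successive IP ID with
    the next watermark bit, cycling through the short bit sequence."""
    return [(ident & 0xFFFE) | bits[i % len(bits)] for i, ident in enumerate(ids)]

def detect_watermark(ids, length):
    """Majority-vote each bit position so that moderate packet loss or
    reordering does not flip the recovered watermark."""
    votes = [[0, 0] for _ in range(length)]
    for i, ident in enumerate(ids):
        votes[i % length][ident & 1] += 1
    return [int(v[1] > v[0]) for v in votes]

flow_ids = list(range(40000, 40032))          # synthetic IP ID values of a flow
marked = embed_watermark(flow_ids, [1, 0, 1, 1, 0, 0, 1, 0])
print(detect_watermark(marked, 8))            # -> [1, 0, 1, 1, 0, 0, 1, 0]
```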
{"title":"IP-Pealing: A Robust Network Flow Watermarking Method Based on IP Packet Sequence","authors":"Wangxin Feng;Xiangyang Luo;Tengyao Li;Chunfang Yang","doi":"10.23919/cje.2022.00.366","DOIUrl":"https://doi.org/10.23919/cje.2022.00.366","url":null,"abstract":"Network flow watermarking (NFW) is usually used for flow correlation. By actively modulating some features of the carrier traffic, NFW can establish the correspondence between different network nodes. In the face of strict demands of network traffic tracing, current watermarking methods cannot work efficiently due to the dependence on specific protocols, demand for large quantities of packets, weakness on resisting network channel interferences and so on. To this end, we propose a robust network flow watermarking method based on IP packet sequence, called as IP-Pealing. It is designed to utilize the packet sequence as watermark carrier with IP identification field which is insensitive to time jitter and suitable for all IP based traffic. To enhance the robustness against packet loss and packet reordering, the detection sequence set is constructed in terms of the variation range of packet sequence, correcting the possible errors caused by the network transmission. To improve the detection accuracy, the long watermark information is divided into several short sequences to embed in turn and assembled during detection. By a large number of experiments on the Internet, the overall detection rate and accuracy of IP-Pealing reach 99.91% and 99.42% respectively. In comparison with the classical network flow watermarking methods, such as PROFW, IBW, ICBW, WBIPD and SBTT, the accuracy of IP-Pealing is increased by 13.70% to 54.00%.","PeriodicalId":50701,"journal":{"name":"Chinese Journal of Electronics","volume":"33 3","pages":"694-707"},"PeriodicalIF":1.2,"publicationDate":"2024-03-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=10543218","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141187370","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2024-03-31 | DOI: 10.23919/cje.2023.00.181
Fei Li;Yiqiang Chen;Yang Gu;Yaowei Wang
The key to synthesizing the features of electronic medical record (EMR) big data and using them for specific medical purposes, such as mortality and phenotype prediction, is to integrate individual medical events with overall multivariate time-series feature extraction automatically, while also alleviating data imbalance. This paper provides a general feature extraction method that reduces manual intervention and automatically processes large-scale data. The processing uses two variational auto-encoders (VAEs) to extract individual and global features automatically. It avoids the well-known posterior collapse problem of the Transformer VAE through a uniquely designed "proportional and stabilizing" mechanism, which also forms a unique means of alleviating data imbalance. We conducted experiments using ICU-stay patients' data from the MIMIC-III database and compared the method with mainstream EMR time-series processing methods. The results show that the method extracts visible and comprehensive features, alleviates data imbalance, and improves accuracy in specific prediction tasks.
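Each of the two feature extractors is a variational auto-encoder: an encoder producing a mean and log-variance, a reparameterized latent sample, and a decoder trained with a reconstruction-plus-KL objective. A minimal PyTorch sketch of one such VAE (sizes and the KL weight are illustrative; the paper's Transformer-based architecture and "proportional and stabilizing" mechanism are not reproduced here):

```python
import torch
import torch.nn as nn

class TinyVAE(nn.Module):
    """Minimal VAE, included only to illustrate the two-encoder idea:
    one such model can embed individual medical events, a second the
    whole multivariate time series (layer sizes are illustrative)."""
    def __init__(self, d_in=64, d_lat=16):
        super().__init__()
        self.enc = nn.Linear(d_in, 2 * d_lat)   # outputs mean and log-variance
        self.dec = nn.Linear(d_lat, d_in)

    def forward(self, x):
        mu, logvar = self.enc(x).chunk(2, dim=-1)
        z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()   # reparameterization
        recon = self.dec(z)
        kl = 0.5 * (mu.pow(2) + logvar.exp() - 1 - logvar).sum(-1).mean()
        return recon, kl

vae = TinyVAE()
x = torch.randn(8, 64)
recon, kl = vae(x)
loss = (recon - x).pow(2).mean() + 0.1 * kl   # ELBO with an illustrative KL weight
```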
{"title":"Extracting Integrated Features of Electronic Medical Records Big Data for Mortality and Phenotype Prediction","authors":"Fei Li;Yiqiang Chen;Yang Gu;Yaowei Wang","doi":"10.23919/cje.2023.00.181","DOIUrl":"https://doi.org/10.23919/cje.2023.00.181","url":null,"abstract":"The key to synthesizing the features of electronic medical records (EMR) big data and using them for specific medical purposes, such as mortality and phenotype prediction, is to integrate the individual medical event and the overall multivariate time series feature extraction automatically, as well as to alleviate data imbalance problems. This paper provides a general feature extraction method to reduce manual intervention and automatically process large-scale data. The processing uses two variational auto-encoders (VAEs) to automatically extract individual and global features. It avoids the well-known posterior collapse problem of Transformer VAE through a uniquely designed “proportional and stabilizing” mechanism and forms a unique means to alleviate the data imbalance problem. We conducted experiments using ICU-STAY patients' data from the MIMIC-III database and compared them with the mainstream EMR time series processing methods. The results show that the method extracts visible and comprehensive features, alleviates data imbalance problems and improves the accuracy in specific predicting tasks.","PeriodicalId":50701,"journal":{"name":"Chinese Journal of Electronics","volume":"33 3","pages":"776-792"},"PeriodicalIF":1.2,"publicationDate":"2024-03-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=10543236","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141187435","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}