Purposeful co-design of OFDM signals for ranging and communications
Andrew Graff, Todd E. Humphreys
Pub Date: 2024-01-30 | DOI: 10.1186/s13634-024-01110-w

This paper analyzes the fundamental trade-offs that occur in the co-design of pilot resource allocations in orthogonal frequency-division multiplexing signals for both ranging (via time-of-arrival estimation) and communications. These trade-offs are quantified through the Shannon capacity bound, probability of outage, and the Ziv–Zakai bound on range estimation variance. Bounds are derived for signals experiencing frequency-selective Rayleigh block fading, accounting for the impact of limited channel knowledge and multi-antenna reception. Uncompensated carrier frequency offset and phase errors are also factored into the capacity bounds. Analysis based on the derived bounds demonstrates how Pareto-optimal design choices can be made to optimize the communication throughput, probability of outage, and ranging variance. Different pilot resource allocation strategies are then analyzed, showing how Pareto-optimal design choices change depending on the channel.
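The paper's ranging metric is the Ziv–Zakai bound, which is involved to reproduce; a much simpler illustration of the same design lever is the classical Cramér–Rao bound for time-of-arrival estimation, which shrinks as the mean-square bandwidth of the pilot set grows. The sketch below is not the paper's analysis: the function `toa_crb`, the 64-subcarrier layout, and the pilot placements are illustrative assumptions.

```python
import numpy as np

def toa_crb(pilot_freqs_hz, snr_linear):
    # Classical CRB on ToA variance for equal-power pilots at the given
    # baseband frequencies: var >= 1 / (8*pi^2 * SNR * beta^2), where
    # beta^2 is the mean-square bandwidth of the pilot set.
    f = np.asarray(pilot_freqs_hz, dtype=float)
    beta_sq = np.mean(f**2)
    return 1.0 / (8.0 * np.pi**2 * snr_linear * beta_sq)

# Hypothetical 64-subcarrier OFDM band, 312.5 kHz spacing, SNR = 10 dB
scs = 312.5e3
subcarriers = (np.arange(64) - 31.5) * scs
snr = 10**(10 / 10)

# Placing pilots at the band edges maximizes beta^2 and lowers the bound,
# the same frequency-allocation trade-off the paper studies via the ZZB.
edge   = np.concatenate([subcarriers[:8], subcarriers[-8:]])
middle = subcarriers[24:40]
print(toa_crb(edge, snr) < toa_crb(middle, snr))  # True
```

The CRB is looser than the Ziv–Zakai bound at low SNR, which is precisely why the paper uses the latter; the pilot-placement intuition carries over.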
De-noising classification method for financial time series based on ICEEMDAN and wavelet threshold, and its application
Bing Liu, Huanhuan Cheng
Pub Date: 2024-01-26 | DOI: 10.1186/s13634-024-01115-5

This paper proposes a classification method for financial time series that addresses the significant issue of noise. The proposed method combines improved complete ensemble empirical mode decomposition with adaptive noise (ICEEMDAN) and wavelet threshold de-noising. The method begins by employing ICEEMDAN to decompose the time series into modal components and residuals. Using the noise component verification approach introduced in this paper, these components are categorized into noisy and de-noised elements. The noisy components are then de-noised using the wavelet threshold technique, which separates the non-noise and noise elements. The final de-noised output is produced by merging the non-noise elements with the de-noised components, and the 1-NN (nearest neighbor) algorithm is applied for time series classification. Highlighting its practical value in finance, this paper introduces a two-step stock classification prediction method that combines time series classification with a BP (backpropagation) neural network. The method first classifies stocks into portfolios with high internal similarity using time series classification. It then employs a BP neural network to predict the classification of stock price movements within these portfolios. Backtesting confirms that this approach can enhance the accuracy of predicting stock price fluctuations.
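Two of the pipeline's building blocks are standard enough to sketch: the wavelet soft-threshold rule and the final 1-NN classification. The numpy-only sketch below uses a single-level Haar transform with the universal threshold as a stand-in; the paper's ICEEMDAN decomposition and noise-component verification step are omitted, and all function names here are illustrative assumptions.

```python
import numpy as np

def haar_dwt(x):
    # Single-level Haar transform: approximation and detail coefficients
    x = np.asarray(x, dtype=float)
    a = (x[0::2] + x[1::2]) / np.sqrt(2)
    d = (x[0::2] - x[1::2]) / np.sqrt(2)
    return a, d

def haar_idwt(a, d):
    # Exact inverse of haar_dwt
    x = np.empty(2 * len(a))
    x[0::2] = (a + d) / np.sqrt(2)
    x[1::2] = (a - d) / np.sqrt(2)
    return x

def soft_threshold(c, t):
    # Wavelet soft-threshold rule: shrink coefficients toward zero
    return np.sign(c) * np.maximum(np.abs(c) - t, 0.0)

def denoise(x):
    # Threshold only the detail band, using the universal threshold with
    # the noise level estimated from the median absolute detail coefficient
    a, d = haar_dwt(x)
    sigma = np.median(np.abs(d)) / 0.6745
    t = sigma * np.sqrt(2 * np.log(len(x)))
    return haar_idwt(a, soft_threshold(d, t))

def classify_1nn(query, train_x, train_y):
    # 1-NN under Euclidean distance, as in the paper's final step
    dists = np.linalg.norm(train_x - query, axis=1)
    return train_y[np.argmin(dists)]
```

A real reproduction would apply the thresholding to the noisy ICEEMDAN modal components rather than to the raw series, and would use a proper wavelet library with multiple decomposition levels.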
Energy-efficient access point clustering and power allocation in cell-free massive MIMO networks: a hierarchical deep reinforcement learning approach
Fangqing Tan, Quanxuan Deng, Qiang Liu
Pub Date: 2024-01-26 | DOI: 10.1186/s13634-024-01111-9

Cell-free massive multiple-input multiple-output (CF-mMIMO) has attracted considerable attention due to its potential for delivering high data rates and energy efficiency (EE). In this paper, we investigate downlink resource allocation in CF-mMIMO systems. A hierarchical deep deterministic policy gradient (H-DDPG) framework is proposed to jointly optimize access point (AP) clustering and power allocation. The framework uses two control-network layers operating on different timescales to enhance the downlink EE of CF-mMIMO systems by cooperatively optimizing AP clustering and power allocation. The upper layer handles the system-level problem, AP clustering: it refines the wireless network configuration by running DDPG on a large timescale while meeting the minimum spectral efficiency (SE) constraint of each user. The lower layer solves the link-level sub-problem, power allocation: it reduces interference between APs and improves transmission performance by running DDPG on a small timescale while meeting the maximum transmit power constraint of each AP. The two DDPG agents are trained separately, allowing them to learn from the environment and gradually improve their policies to maximize the system EE. Numerical results validate the effectiveness of the proposed algorithm in terms of convergence speed, SE, and EE.
Electrocardiogram prediction based on variational mode decomposition and a convolutional gated recurrent unit
HongBo Wang, YiZhe Wang, Yu Liu, YueJuan Yao
Pub Date: 2024-01-25 | DOI: 10.1186/s13634-024-01113-7

Electrocardiogram (ECG) prediction is highly important for detecting and storing heart signals and identifying potential health hazards. To improve the duration and accuracy of ECG prediction on the basis of noise filtering, a new algorithm based on variational mode decomposition (VMD) and a convolutional gated recurrent unit (ConvGRU) was proposed, named VMD-ConvGRU. VMD can directly remove noise, such as baseline drift noise, without manual intervention, greatly improving the model usability, and its combination with ConvGRU improves the prediction time and accuracy. The proposed algorithm was compared with three related algorithms (PSR-NN, VMD-NN and TS fuzzy) on MIT-BIH, an internationally recognized arrhythmia database. The experiments showed that the VMD-ConvGRU algorithm not only achieves better prediction accuracy than the other three algorithms but also has a considerable advantage in terms of prediction time. In addition, prediction experiments on both the MIT-BIH and European ST-T databases have shown that the VMD-ConvGRU algorithm has better generalizability than the other methods.
Bias-free estimation of the covariance function and the power spectral density from data with missing samples including extended data gaps
Nils Damaschke, Volker Kühn, Holger Nobach
Pub Date: 2024-01-25 | DOI: 10.1186/s13634-024-01108-4

Nonparametric estimation of the covariance function and the power spectral density of uniformly spaced data from stationary stochastic processes with missing samples is investigated. Several common methods are tested for their systematic and random errors under variations in the distribution of the missing samples. In addition to random and independent outliers, the influence of longer, and hence correlated, data gaps on the performance of the various estimators is also investigated. The aim is to construct a bias-free estimation routine for the covariance function and the power spectral density from stationary stochastic processes under the condition of missing samples, with optimum use of the available information in terms of low estimation variance and mean square error, independent of the spectral composition of the data gaps. The proposed procedure combines three methods that allow bias-free estimation of the desired statistical functions with efficient use of the available information: weighted averaging over valid samples, derivation of the covariance estimate for the entire data set with restriction of the domain of the covariance function in a post-processing step, and appropriate correction of the covariance estimate after removal of the estimated mean value. The procedures abstain from interpolation of missing samples as well as block subdivision. Spectral estimates are obtained from covariance functions and vice versa using the Wiener–Khinchin theorem.
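The core idea of averaging only over valid samples, then moving between covariance and spectrum via the Wiener–Khinchin theorem, can be sketched compactly. This is a bare-bones illustration, not the paper's routine: it implements plain pair-wise averaging with mean removal over valid samples, and omits the paper's bias correction after mean removal and its domain-restriction post-processing. Function names are assumptions.

```python
import numpy as np

def gapped_autocovariance(x, valid, max_lag):
    # Autocovariance of uniformly spaced data with missing samples:
    # average x[n]*x[n+k] only over lags where BOTH samples are valid
    # (no interpolation of gaps, no block subdivision).
    x = np.asarray(x, dtype=float)
    valid = np.asarray(valid, dtype=bool)
    mu = x[valid].mean()                  # mean over valid samples only
    xc = np.where(valid, x - mu, 0.0)     # gaps contribute zero products
    cov = np.empty(max_lag + 1)
    for k in range(max_lag + 1):
        pairs = valid[:len(valid) - k] & valid[k:]
        n = pairs.sum()
        cov[k] = (xc[:len(xc) - k] * xc[k:]).sum() / n if n else 0.0
    return cov

def psd_from_cov(cov):
    # Wiener-Khinchin: the PSD is the Fourier transform of the
    # symmetrized covariance function
    sym = np.concatenate([cov, cov[-2:0:-1]])
    return np.fft.fft(sym).real
```

Dividing by the per-lag count of valid pairs (rather than the record length) is what keeps the estimate unbiased by the gap pattern; the paper shows that mean removal reintroduces a bias that still needs the correction step omitted here.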
Detecting GNSS spoofing using deep learning
Pub Date: 2024-01-18 | DOI: 10.1186/s13634-023-01103-1

The Global Navigation Satellite System (GNSS) is pervasively used in position, navigation, and timing (PNT) applications. As a consequence, important assets have become vulnerable to intentional attacks on GNSS, of which spoofing transmissions are particularly relevant: they aim to supersede legitimate signals with forged ones in order to control a receiver's PNT computations. Detecting such attacks is therefore crucial, and this article proposes a deep learning-based algorithm to achieve the task. A data-driven classifier is considered that has two components: a deep learning model that leverages parallelization to reduce its computational complexity, and a clustering algorithm that estimates the number and parameters of the spoofing signals. Based on the experimental results, it can be concluded that the proposed scheme exhibits superior performance compared to existing solutions, especially under moderate-to-high signal-to-noise ratios.
SHC: soft-hard correspondences framework for simplifying point cloud registration
Zhaoxiang Chen, Feng Yu, Shuqing Liu, Jiacheng Cao, Zhuohan Xiao, Minghua Jiang
Pub Date: 2024-01-17 | DOI: 10.1186/s13634-023-01104-0

Point cloud registration is a multifaceted problem that involves a series of procedures. Many deep learning methods employ complex structured networks to achieve robust registration performance. However, these intricate structures can amplify the challenges of network learning and impede gradient propagation. To address this concern, the soft-hard correspondence (SHC) framework is introduced in the present paper to streamline the registration problem. The framework encompasses two modes: the hard correspondence mode, which transforms the registration problem into a correspondence pair search problem, and the soft correspondence mode, which addresses this new problem. The simplification of the problem provides two advantages. First, it eliminates the need for intermediate operations that lead to error fusion and counteraction, thereby improving gradient propagation. Second, a perfect solution is not necessary to solve the new problem, since accurate registration results can be achieved even in the presence of errors in the found pairs. The experimental results demonstrate that SHC successfully simplifies the registration problem. It achieves performance comparable to complex networks using a simple network and can achieve zero error on datasets with perfect correspondence pairs.
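The reduction to a correspondence pair search works because, once pairs are known, the rigid transform has a classical closed-form least-squares solution (the Kabsch/Procrustes algorithm). The sketch below shows that closed form, which also hints at why imperfect pairs are tolerable: the solution averages over all pairs. This is standard geometry, not the SHC network itself, and the function name is an assumption.

```python
import numpy as np

def kabsch(src, dst):
    # Closed-form least-squares rigid transform (R, t) with dst ~ R @ src + t,
    # given corresponding 3-D point pairs as rows of src and dst.
    cs, cd = src.mean(axis=0), dst.mean(axis=0)
    H = (src - cs).T @ (dst - cd)          # 3x3 cross-covariance of pairs
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T)) # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = cd - R @ cs
    return R, t

# Recover a known rotation about z plus a translation
rng = np.random.default_rng(0)
src = rng.normal(size=(20, 3))
ang = 0.5
Rz = np.array([[np.cos(ang), -np.sin(ang), 0.0],
               [np.sin(ang),  np.cos(ang), 0.0],
               [0.0,          0.0,         1.0]])
dst = src @ Rz.T + np.array([1.0, -2.0, 0.5])
R, t = kabsch(src, dst)
```

With perfect pairs the recovery is exact, matching the paper's observation of zero error on datasets with perfect correspondence pairs; with noisy or partly wrong pairs the SVD still returns the least-squares best rotation.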
Unsupervised domain adaptive bearing fault diagnosis based on maximum domain discrepancy
Cuixiang Wang, Shengkai Wu, Xing Shao
Pub Date: 2024-01-11 | DOI: 10.1186/s13634-023-01107-x

In existing domain adaptation-based bearing fault diagnosis methods, the difference between source-domain and target-domain data is not obvious. Moreover, the parameters of the target-domain feature extractor gradually approach those of the source-domain feature extractor in order to fool the discriminator, which results in similar feature distributions for the source and target domains. These issues make it difficult for domain adaptation-based bearing fault diagnosis methods to achieve satisfactory performance. An unsupervised domain adaptive bearing fault diagnosis method based on maximum domain discrepancy (UDA-BFD-MDD) is proposed in this paper. In UDA-BFD-MDD, maximum domain discrepancy is exploited to maximize the feature difference between the source and target domains, while the output features of the target-domain feature extractor can still fool the discriminator. The performance of UDA-BFD-MDD is verified through comprehensive experiments using the bearing dataset of Case Western Reserve University. The experimental results demonstrate that UDA-BFD-MDD is more stable during the training process and achieves a higher accuracy rate.
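The paper's "maximum domain discrepancy" criterion is closely related to the standard maximum mean discrepancy (MMD), the usual kernel statistic for measuring the distance between two feature distributions in domain adaptation. As an assumed stand-in for the paper's exact criterion, here is the biased MMD estimator with a Gaussian kernel; the function names are illustrative.

```python
import numpy as np

def gaussian_kernel(a, b, gamma):
    # Pairwise RBF kernel matrix between rows of a and rows of b
    sq = np.sum(a**2, axis=1)[:, None] + np.sum(b**2, axis=1)[None, :] \
         - 2.0 * a @ b.T
    return np.exp(-gamma * sq)

def mmd_squared(x, y, gamma=1.0):
    # Biased estimate of the squared maximum mean discrepancy between
    # source-domain features x and target-domain features y
    kxx = gaussian_kernel(x, x, gamma).mean()
    kyy = gaussian_kernel(y, y, gamma).mean()
    kxy = gaussian_kernel(x, y, gamma).mean()
    return kxx + kyy - 2.0 * kxy
```

In a training loop this quantity would enter the loss with a sign chosen per the method's goal; note the paper maximizes the discrepancy between domains, the opposite of the usual MMD-minimizing adaptation objective.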
Average effective subcarrier-domain sparse representation approach for target information estimation in CP-OFDM-based passive bistatic radar
Zhixin Zhao, Yanghang Gong, Huilin Zhou, Yulong Cao
Pub Date: 2024-01-09 | DOI: 10.1186/s13634-023-01106-y

Although some existing sparse representation (SR) methods are robust for target detection in passive bistatic radar (PBR), they still face the challenges of high computational complexity and poor detection performance for targets with extremely low signal-to-clutter ratio (SCR). This paper therefore investigates an average effective subcarrier (AES)-domain sparse representation approach. First, the AES-based SR model is proposed to address the high computational complexity; it is established by exploiting the sparseness of orthogonal frequency-division multiplexing (OFDM) signals with cyclic prefix (CP) in each effective subcarrier domain. Then, considering the difficulty of detecting extremely low-SCR targets, clutter cancellation is implemented through an SR-based optimization model. Two AES-S algorithms, AES-S-based clutter cancellation in the time domain (AES-S-T) and AES-S-based clutter cancellation in the subcarrier domain (AES-S-C), are proposed, further reducing the computational complexity. Finally, extensive simulation and experimental results illustrate that the proposed algorithms achieve good detection performance and low computational complexity in PBR detection scenarios.
Multi-UAV-assisted Internet of Remote Things communication within satellite–aerial–terrestrial integrated network
Yuanyuan Yao, Dengyang Dong, Changjun Cai, Sai Huang, Xin Yuan, Xiaocong Gong
Pub Date: 2024-01-09 | DOI: 10.1186/s13634-023-01101-3

Due to the limited transmission capabilities of terrestrial intelligent devices within the Internet of Remote Things (IoRT), this paper proposes an optimization scheme aimed at enhancing the data transmission rate while ensuring communication reliability. This scheme focuses on multi-unmanned aerial vehicle (UAV)-assisted IoRT data communication within the satellite–aerial–terrestrial integrated network (SATIN), one of the key technologies for sixth generation (6G) networks. To optimize the system's data transmission rate, we introduce a multi-dimensional coverage and power optimization (CPO) algorithm, rooted in the block coordinate descent (BCD) method. This algorithm concurrently optimizes various parameters, including the number and deployment of UAVs, the association between IoRT devices and UAVs, and the transmission power of both devices and UAVs. To ensure comprehensive coverage of a large-scale, randomly distributed array of terrestrial devices, we combine a machine learning algorithm and present the Dynamic Deployment based on K-means (DDK) algorithm. Additionally, we address the non-convexity challenge in transmission power allocation through variable substitution and the successive convex approximation (SCA) technique. Simulation results substantiate the remarkable efficacy of our CPO algorithm, showcasing up to a 240% improvement in the uplink transmission rate of IoRT data compared to conventional methods.
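The K-means core of a DDK-style deployment is easy to illustrate: cluster the terrestrial device positions and hover one UAV over each centroid, which minimizes the average device-to-UAV distance for a fixed number of UAVs. The sketch below is only that core, under assumptions: plain Lloyd iterations with a fixed K, whereas the paper's DDK also adapts the number of UAVs dynamically; the function name is hypothetical.

```python
import numpy as np

def kmeans_uav_placement(devices, k, iters=50, seed=0):
    # Cluster 2-D device positions with plain K-means (Lloyd's algorithm)
    # and place one UAV at each centroid.
    devices = np.asarray(devices, dtype=float)
    rng = np.random.default_rng(seed)
    uavs = devices[rng.choice(len(devices), size=k, replace=False)].copy()
    assign = np.zeros(len(devices), dtype=int)
    for _ in range(iters):
        # Assign each device to its nearest UAV
        d = np.linalg.norm(devices[:, None, :] - uavs[None, :, :], axis=2)
        assign = d.argmin(axis=1)
        # Move each UAV to the centroid of its assigned devices
        for j in range(k):
            members = devices[assign == j]
            if len(members):
                uavs[j] = members.mean(axis=0)
    return uavs, assign
```

In the paper's scheme this placement step would be interleaved with the BCD power-allocation updates, since coverage and transmit power are optimized jointly.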