
Digital Signal Processing: Latest Publications

Adaptive polarimetric persymmetric detection for distributed subspace targets in lognormal texture clutter
IF 2.9 | CAS Tier 3 (Engineering & Technology) | Q2 ENGINEERING, ELECTRICAL & ELECTRONIC | Pub Date: 2024-11-15 | DOI: 10.1016/j.dsp.2024.104872
Lichao Liu, Qiang Guo, Yuhang Tian, Mykola Kaliuzhnyi, Vladimir Tuz
In this paper, the adaptive polarimetric persymmetric detection of distributed subspace targets against a compound Gaussian clutter background is investigated, where the clutter texture follows a lognormal distribution. Based on the two-step Generalized Likelihood Ratio Test (2S GLRT), the two-step maximum a posteriori Generalized Likelihood Ratio Test (2S MAP GLRT), the two-step Rao (2S Rao) test, and the two-step Wald (2S Wald) test, we propose four polarimetric persymmetric detectors. Initially, we model the target echo as a distributed subspace signal, assuming a known clutter texture and polarization speckle covariance matrix (PSCM), and derive the corresponding test statistics. Then, the lognormal texture is estimated via the maximum a posteriori (MAP) criterion. Conventionally, a set of secondary data, which shares the same PSCM as the cells under test (CUTs), is assumed to participate in the estimation of the PSCM, and its inherent persymmetric property is exploited during the estimation process. Finally, the estimated values are substituted into the proposed test statistics to obtain fully adaptive polarimetric persymmetric detectors. Numerical experiments on simulated data and measured sea clutter data demonstrate that the four proposed adaptive polarimetric persymmetric detectors exhibit a constant false alarm rate (CFAR) property with respect to the PSCM and satisfactory detection performance for distributed subspace targets.
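The persymmetric structure of the PSCM is what lets the secondary data be used more efficiently. As a rough illustration only (not the paper's estimator), the minimal sketch below enforces persymmetry on a plain sample covariance estimate by forward-backward averaging with the exchange matrix; the function name, data shapes, and snapshot counts are illustrative assumptions.

```python
import numpy as np

def persymmetric_covariance(secondary_data):
    """Persymmetric-constrained sample covariance estimate.

    secondary_data: complex array of shape (N, K) holding K secondary
    snapshots of dimension N that share the covariance structure of the
    cells under test. The persymmetric constraint R = J R* J (J = exchange
    matrix) is enforced by forward-backward averaging.
    """
    N, K = secondary_data.shape
    # Plain sample covariance matrix from the secondary data.
    R = secondary_data @ secondary_data.conj().T / K
    # Exchange (anti-identity) matrix.
    J = np.fliplr(np.eye(N))
    # Forward-backward averaging yields a persymmetric estimate,
    # roughly doubling the effective number of snapshots.
    return 0.5 * (R + J @ R.conj() @ J)

# Toy usage: 8-dimensional polarimetric snapshots, 32 secondary cells.
rng = np.random.default_rng(0)
Z = (rng.standard_normal((8, 32)) + 1j * rng.standard_normal((8, 32))) / np.sqrt(2)
R_p = persymmetric_covariance(Z)
print(np.allclose(R_p, np.fliplr(np.flipud(R_p)).conj()))  # persymmetry check: R = J R* J
```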
Citations: 0
MFFR-net: Multi-scale feature fusion and attentive recalibration network for deep neural speech enhancement
IF 2.9 | CAS Tier 3 (Engineering & Technology) | Q2 ENGINEERING, ELECTRICAL & ELECTRONIC | Pub Date: 2024-11-14 | DOI: 10.1016/j.dsp.2024.104870
Nasir Saleem, Sami Bourouis
Deep neural networks (DNNs) have been successfully applied to advancing speech enhancement (SE), particularly in overcoming the challenges posed by nonstationary noisy backgrounds. In this context, multi-scale feature fusion and recalibration (MFFR) can improve speech enhancement performance by combining multi-scale and recalibrated features. This paper proposes a speech enhancement system that capitalizes on a large-scale pre-trained model, seamlessly fused with features attentively recalibrated using varying kernel sizes in convolutional layers. This process enables the SE system to capture features across diverse scales, enhancing its overall performance. The proposed SE system uses a transferable feature-extractor architecture and integrates multi-scale, attentively recalibrated features. Using 2D convolutional layers, the convolutional encoder-decoder extracts both local and contextual features from speech signals. To capture long-term temporal dependencies, a bidirectional simple recurrent unit (BSRU) serves as a bottleneck layer positioned between the encoder and decoder. The experiments are conducted on three publicly available datasets: Texas Instruments/Massachusetts Institute of Technology (TIMIT), LibriSpeech, and Voice Cloning Toolkit + Diverse Environments Multi-channel Acoustic Noise Database (VCTK+DEMAND). The experimental results show that the proposed SE system outperforms several recent approaches on the Short-Time Objective Intelligibility (STOI) and Perceptual Evaluation of Speech Quality (PESQ) metrics. On the TIMIT dataset, the proposed system shows a considerable improvement in STOI (17.3%) and PESQ (0.74) over the noisy mixture. Evaluation on the LibriSpeech dataset yields improvements of 17.6% in STOI and 0.87 in PESQ.
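To make the "varying kernel sizes plus attentive recalibration" idea concrete, here is a minimal PyTorch sketch of parallel multi-scale convolution branches followed by a squeeze-and-excitation style channel gate. It is an interpretation for illustration only; the module name, layer sizes, and gating choice are assumptions, not MFFR-net's actual architecture.

```python
import torch
import torch.nn as nn

class MultiScaleRecalibration(nn.Module):
    """Multi-scale fusion with channel recalibration (illustrative sketch)."""
    def __init__(self, in_ch, out_ch, kernels=(3, 5, 7), reduction=4):
        super().__init__()
        # One branch per kernel size; padding keeps the spatial shape.
        self.branches = nn.ModuleList(
            [nn.Conv2d(in_ch, out_ch, k, padding=k // 2) for k in kernels]
        )
        fused = out_ch * len(kernels)
        # Squeeze-and-excitation style gate that re-weights fused channels.
        self.recalibrate = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(fused, fused // reduction, 1), nn.ReLU(inplace=True),
            nn.Conv2d(fused // reduction, fused, 1), nn.Sigmoid(),
        )
        self.project = nn.Conv2d(fused, out_ch, 1)

    def forward(self, x):
        multi = torch.cat([b(x) for b in self.branches], dim=1)
        return self.project(multi * self.recalibrate(multi))

# Toy usage on spectrogram-like inputs of shape (batch, 1, freq, time).
feats = MultiScaleRecalibration(1, 16)(torch.randn(2, 1, 64, 100))
print(feats.shape)  # torch.Size([2, 16, 64, 100])
```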
Citations: 0
Efficient recurrent real video restoration
IF 2.9 | CAS Tier 3 (Engineering & Technology) | Q2 ENGINEERING, ELECTRICAL & ELECTRONIC | Pub Date: 2024-11-13 | DOI: 10.1016/j.dsp.2024.104851
Antoni Buades, Jose-Luis Lisani
We propose a novel method that addresses the most common limitations of real video sequences, including noise, blur, flicker, and low contrast. This method leverages the Discrete Cosine Transform (DCT) extensively for both deblurring and denoising tasks, ensuring computational efficiency. It also incorporates classical strategies for tonal stabilization and low-light enhancement. To the best of our knowledge, this is the first unified framework that tackles all these problems simultaneously. Compared to state-of-the-art learning-based methods for denoising and deblurring, our approach achieves better results while offering additional benefits such as full interpretability, reduced memory usage, and lighter computational requirements, making it well-suited for integration into mobile device processing chains.
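Since the method leans heavily on the DCT for both deblurring and denoising, a minimal sketch of DCT hard-threshold denoising on a single patch is shown below. The function, threshold value, and patch size are illustrative assumptions and do not reproduce the paper's recurrent, multi-task pipeline.

```python
import numpy as np
from scipy.fft import dctn, idctn

def dct_denoise(patch, threshold):
    """Denoise one image patch by hard-thresholding its 2-D DCT coefficients."""
    coeffs = dctn(patch, norm="ortho")
    coeffs[np.abs(coeffs) < threshold] = 0.0   # zero small (mostly noise) coefficients
    return idctn(coeffs, norm="ortho")

# Toy usage: a smooth ramp corrupted by Gaussian noise of standard deviation 0.1.
rng = np.random.default_rng(0)
clean = np.outer(np.linspace(0, 1, 16), np.linspace(0, 1, 16))
noisy = clean + 0.1 * rng.standard_normal(clean.shape)
denoised = dct_denoise(noisy, threshold=0.3)
print(np.abs(noisy - clean).mean(), np.abs(denoised - clean).mean())  # second value is typically smaller
```

In a full pipeline this would be applied per overlapping patch (and per frame, with temporal aggregation), which keeps the computation cheap because the DCT is separable and fast.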
Citations: 0
PV-YOLO: A lightweight pedestrian and vehicle detection model based on improved YOLOv8
IF 2.9 | CAS Tier 3 (Engineering & Technology) | Q2 ENGINEERING, ELECTRICAL & ELECTRONIC | Pub Date: 2024-11-13 | DOI: 10.1016/j.dsp.2024.104857
Yuhang Liu, Zhenghua Huang, Qiong Song, Kun Bai
With the frequent occurrence of urban traffic accidents, fast and accurate detection of pedestrian and vehicle targets has become one of the key technologies for intelligent assisted driving systems. To meet the efficiency and lightweight requirements of smart devices, this paper proposes a lightweight pedestrian and vehicle detection model based on the YOLOv8n model, named PV-YOLO. In the proposed model, receptive-field attention convolution (RFAConv) serves as the backbone network because of its target feature extraction ability, and the neck utilizes the bidirectional feature pyramid network (BiFPN) instead of the original path aggregation network (PANet) to simplify the feature fusion process. Moreover, a lightweight detection head is introduced to reduce the computational burden and improve the overall detection accuracy. In addition, a small target detection layer is designed to improve the accuracy for small distant targets. Finally, to reduce the computational burden further, the lightweight C2f module is utilized to compress the model. The experimental results on the BDD100K and KITTI datasets demonstrate that the proposed PV-YOLO can achieve higher detection accuracy than YOLOv8n and other baseline methods with less model complexity.
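The BiFPN replaces PANet mainly because of its weighted, bidirectional fusion. A minimal sketch of BiFPN-style fast normalized fusion is given below; the module name and tensor shapes are assumptions, and the full top-down/bottom-up paths of a complete BiFPN are omitted.

```python
import torch
import torch.nn as nn

class WeightedFusion(nn.Module):
    """BiFPN-style fast normalized fusion of feature maps (illustrative sketch)."""
    def __init__(self, num_inputs, eps=1e-4):
        super().__init__()
        # One learnable, non-negative weight per input feature map.
        self.weights = nn.Parameter(torch.ones(num_inputs))
        self.eps = eps

    def forward(self, feats):
        w = torch.relu(self.weights)          # keep weights >= 0
        w = w / (w.sum() + self.eps)          # fast normalization (no softmax)
        # Inputs are assumed already resized to a common shape.
        return sum(wi * f for wi, f in zip(w, feats))

# Toy usage: fuse two feature maps of the same shape.
f1, f2 = torch.randn(1, 64, 40, 40), torch.randn(1, 64, 40, 40)
fused = WeightedFusion(num_inputs=2)([f1, f2])
print(fused.shape)  # torch.Size([1, 64, 40, 40])
```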
Citations: 0
Video foreground and background separation via Gaussian scale mixture and generalized nuclear norm based robust principal component analysis
IF 2.9 | CAS Tier 3 (Engineering & Technology) | Q2 ENGINEERING, ELECTRICAL & ELECTRONIC | Pub Date: 2024-11-12 | DOI: 10.1016/j.dsp.2024.104863
Yongpeng Yang, Zhenzhen Yang, Jianlin Li
For the past decade, robust principal component analysis (RPCA) has been the most representative problem formulation for video foreground and background separation, decomposing an observed matrix into sparse and low-rank matrices. However, existing RPCA methods still have several major limitations for this task, including neglecting the impact of noise, low approximation accuracy of the sparse and low-rank functions, neglecting the spatial-temporal relations of pixels, and regularization parameter selection. All of these limitations reduce their performance for video foreground and background separation. Consequently, to address the neglected impact of noise and the low approximation accuracy, we first design a novel RPCA method based on the Gaussian scale mixture and generalized nuclear norm (GSMGNN), which integrates the Gaussian scale mixture (GSM) and the generalized nuclear norm (GNN). Specifically, the GSM can better describe each foreground pixel in a video by decomposing the foreground into a standardized Gaussian random variable and a positive hidden multiplier, while the GNN better approximates the low-rank background. In addition, we extend the GSMGNN method to a robust Gaussian scale mixture and generalized nuclear norm (RGSMGNN) method that handles noise by introducing an explicit noise term. The efficient alternating direction method of multipliers (ADMM) is adopted to solve the two proposed models by breaking them into smaller, easier-to-handle subproblems. Finally, experiments on challenging datasets demonstrate better effectiveness than many other state-of-the-art methods.
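For context, the classical RPCA baseline that the GSMGNN/RGSMGNN models extend can be solved with an ADMM/ALM-style scheme of singular value thresholding plus soft thresholding. The sketch below implements only that baseline, with illustrative parameter defaults; it is not the paper's GSM or GNN formulation.

```python
import numpy as np

def rpca_admm(D, lam=None, mu=None, n_iter=200):
    """Baseline RPCA: min ||L||_* + lam*||S||_1 subject to D = L + S.

    Singular value thresholding updates the low-rank part L (background);
    elementwise soft thresholding updates the sparse part S (foreground).
    """
    m, n = D.shape
    lam = lam if lam is not None else 1.0 / np.sqrt(max(m, n))
    mu = mu if mu is not None else 0.25 * m * n / np.abs(D).sum()
    Y = np.zeros_like(D)          # Lagrange multipliers
    S = np.zeros_like(D)
    for _ in range(n_iter):
        # Low-rank update via singular value thresholding.
        U, sig, Vt = np.linalg.svd(D - S + Y / mu, full_matrices=False)
        L = (U * np.maximum(sig - 1.0 / mu, 0.0)) @ Vt
        # Sparse update via soft thresholding.
        R = D - L + Y / mu
        S = np.sign(R) * np.maximum(np.abs(R) - lam / mu, 0.0)
        # Dual ascent and a mild penalty increase.
        Y += mu * (D - L - S)
        mu = min(mu * 1.05, 1e7)
    return L, S

# Toy usage: rank-1 "background" plus sparse "foreground" spikes.
rng = np.random.default_rng(0)
background = np.outer(rng.standard_normal(60), rng.standard_normal(40))
foreground = np.zeros_like(background)
foreground[rng.integers(0, 60, 30), rng.integers(0, 40, 30)] = 5.0
L, S = rpca_admm(background + foreground)
print(np.linalg.norm(L - background) / np.linalg.norm(background))  # small if separation works
```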
Citations: 0
CNN Intelligent diagnosis method for bearing incipient faint faults based on adaptive stochastic resonance-wave peak cross correlation sliding sampling
IF 2.9 | CAS Tier 3 (Engineering & Technology) | Q2 ENGINEERING, ELECTRICAL & ELECTRONIC | Pub Date: 2024-11-12 | DOI: 10.1016/j.dsp.2024.104871
Peng Liu, Shuo Zhao, Ludi Kang, Yibing Yin
As a representative of deep learning networks, convolutional neural networks (CNNs) have been widely used in bearing fault diagnosis with good results. However, the length and segmentation of the signal fed to a CNN can have a significant impact on diagnostic accuracy. In addition, the signal-to-noise ratio of early bearing faults is usually very low, which makes it difficult for traditional CNNs to accurately identify and classify these faults. To solve this problem, this paper proposes an adaptive stochastic resonance wave-peak cross-correlation sliding sampling method. First, adaptive stochastic resonance is used to reduce the noise of the original signal; the data are then segmented starting from the signal wave peaks, the correlation coefficients between the segments are calculated, and their maximum is used to determine the size of the segmentation window. Finally, each segment is converted into a 2D image by the Gramian Angular Field and fed into a CNN for diagnostic classification. The design methodology was validated using the Case Western Reserve University bearing dataset. Subsequently, three validation strategies were established on a self-built platform, including mixed diagnosis of 10 different bearing states, variable-speed diagnosis, and low-sampling-data diagnosis. The proposed method outperforms the conventional CNN by 10% on the Case Western Reserve University test set, by 24.67% and 31.17% on the variable-speed tests, respectively, and by 30% in low-sampling-data diagnosis.
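The 2D images fed to the CNN come from the Gramian Angular Field. A minimal sketch of the standard Gramian Angular Summation Field transform is given below; the adaptive stochastic resonance and wave-peak cross-correlation window selection steps are not reproduced, and the segment length is an assumption.

```python
import numpy as np

def gramian_angular_field(signal):
    """Map a 1-D segment to a Gramian Angular Summation Field image."""
    x = np.asarray(signal, dtype=float)
    # Min-max rescale to [-1, 1] so arccos is well defined.
    x = 2.0 * (x - x.min()) / (x.max() - x.min()) - 1.0
    phi = np.arccos(np.clip(x, -1.0, 1.0))        # polar (angular) encoding
    # GASF(i, j) = cos(phi_i + phi_j)
    return np.cos(phi[:, None] + phi[None, :])

# Toy usage: a short vibration-like segment becomes a 128x128 image for a CNN.
t = np.linspace(0, 1, 128)
segment = np.sin(2 * np.pi * 25 * t) + 0.1 * np.random.default_rng(0).standard_normal(128)
image = gramian_angular_field(segment)
print(image.shape)  # (128, 128)
```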
Citations: 0
IGGCN: Individual-guided graph convolution network for pedestrian trajectory prediction
IF 2.9 | CAS Tier 3 (Engineering & Technology) | Q2 ENGINEERING, ELECTRICAL & ELECTRONIC | Pub Date: 2024-11-12 | DOI: 10.1016/j.dsp.2024.104862
Wangxing Chen, Haifeng Sang, Jinyu Wang, Zishan Zhao
Accurately predicting the future trajectory of pedestrians is crucial for applications such as autonomous driving and robot navigation. Graph convolution is widely used in trajectory prediction tasks due to its scalability and adaptive feature-learning capabilities. However, there are two problems with pedestrian trajectory prediction methods based on graph convolution: 1. Previous methods struggled to adjust social interactions according to the attributes of different pedestrians, making it difficult to accurately model the relative importance between different pedestrians and others; 2. Previous methods lacked dynamic processing of pedestrian spatial-temporal interaction features to capture high-level spatial-temporal interaction features effectively. Therefore, we propose an Individual-Guided Graph Convolution Network (IGGCN) for pedestrian trajectory prediction. To tackle problem 1, we design an individual-guided interaction module that can adjust pedestrian social interaction modeling according to the pedestrian's attributes, thereby achieving an accurate description of the relative importance of pedestrians. We extend the module to temporal interaction modeling to further achieve an accurate description of the relative importance of time frames. To address problem 2, we design a deformable convolution module to dynamically process spatial-temporal interaction features through deformable convolution kernels, facilitating the capture of high-level spatial-temporal interaction features. We evaluate our method on the ETH, UCY, and SDD datasets. Quantitative analysis shows that our method has lower prediction errors than the current state-of-the-art methods. Qualitative analysis further reveals that our method effectively eliminates the influence of irrelevant pedestrians and accurately models the spatial-temporal interaction relationship of pedestrians.
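At its core, the model propagates pedestrian features over an interaction graph. The sketch below shows one generic, normalized graph convolution step under assumed shapes; it does not reproduce the individual-guided weighting or the deformable temporal modules proposed in the paper.

```python
import torch
import torch.nn as nn

class GraphConv(nn.Module):
    """One generic graph convolution step over a pedestrian interaction graph."""
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.linear = nn.Linear(in_dim, out_dim)

    def forward(self, x, adj):
        # x: (num_pedestrians, in_dim); adj: (num_pedestrians, num_pedestrians)
        a_hat = adj + torch.eye(adj.size(0))           # add self-loops
        deg = a_hat.sum(dim=1)
        d_inv_sqrt = torch.diag(deg.pow(-0.5))
        a_norm = d_inv_sqrt @ a_hat @ d_inv_sqrt       # symmetric normalization
        return torch.relu(self.linear(a_norm @ x))

# Toy usage: 4 pedestrians, 2-D positions as node features, fully connected graph.
positions = torch.randn(4, 2)
adjacency = torch.ones(4, 4) - torch.eye(4)
print(GraphConv(2, 16)(positions, adjacency).shape)  # torch.Size([4, 16])
```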
Citations: 0
Sparse Bayesian learning based multi trajectory tracking algorithm for direction of arrival trajectory estimation
IF 2.9 | CAS Tier 3 (Engineering & Technology) | Q2 ENGINEERING, ELECTRICAL & ELECTRONIC | Pub Date: 2024-11-12 | DOI: 10.1016/j.dsp.2024.104852
Sahar Barzegari Banadkoki, Mahmoud Ferdosizade Naeiny
One application of sequential sparse signal reconstruction is multi-target Direction of Arrival (DoA) trajectory estimation. In fact, each member of the support set corresponds to the DoA of a moving target at each time instant, and there is a mapping between the indices of the sparse vector and DoA values in the continuous angle space. The key idea of this paper is to use the dynamic information of the continuous angular space to more accurately track sparse vectors and estimate the DoA trajectories of moving sources with time-varying acceleration, based on the Sparse Bayesian Learning (SBL) framework. For this purpose, the members of the estimated support set are mapped to the continuous angular space at each instant. Then, the obtained DoAs are assigned to the available DoA trajectories using the Predictive-Description-Length (PDL) algorithm. Next, the DoA of each source is predicted for the next time instant using the Kalman filter. Finally, the predicted DoAs are mapped to a sparse vector, which is used as prior information for SBL-based sparse reconstruction. Simulation results show that the proposed algorithm, called SBL-MTT (Multi Trajectory Tracking), leads to an accurate reconstruction of successive sparse vectors in the application of DoA trajectory estimation for moving sources.
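The per-track prediction step is a standard Kalman filter over the DoA. Below is a minimal constant-velocity sketch for a single track; the state model, noise levels, and function name are illustrative assumptions rather than the paper's tuned settings.

```python
import numpy as np

def kalman_predict_update(x, P, z, dt=1.0, q=1e-3, r=0.5):
    """One Kalman predict/update step for a DoA track.

    State x = [angle, angular_rate]; z is the DoA assigned to this track at
    the current instant (e.g. by a PDL-style association step).
    """
    F = np.array([[1.0, dt], [0.0, 1.0]])     # constant-velocity motion model
    H = np.array([[1.0, 0.0]])                # only the angle is observed
    Q = q * np.array([[dt**3 / 3, dt**2 / 2], [dt**2 / 2, dt]])
    R = np.array([[r]])
    # Predict.
    x = F @ x
    P = F @ P @ F.T + Q
    # Update with the associated DoA measurement.
    y = z - H @ x
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x = x + K @ y
    P = (np.eye(2) - K @ H) @ P
    return x, P

# Toy usage: track a source sweeping at 0.5 degrees per snapshot.
x, P = np.array([10.0, 0.0]), np.eye(2)
for k in range(20):
    x, P = kalman_predict_update(x, P, z=np.array([10.0 + 0.5 * k]))
print(x)  # angle close to 19.5 deg, rate close to 0.5 deg/snapshot
```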
Citations: 0
An enhanced lightweight model for small-scale pedestrian detection based on YOLOv8s
IF 2.9 | CAS Tier 3 (Engineering & Technology) | Q2 ENGINEERING, ELECTRICAL & ELECTRONIC | Pub Date: 2024-11-10 | DOI: 10.1016/j.dsp.2024.104866
Feifei Zhang, Lee Vien Leong, Kin Sam Yen, Yana Zhang
Autonomous-driving scenarios often involve occluded and distant pedestrians, which lead to missed and false detections or require models that are too large to deploy. To address these issues, this study proposes a lightweight model based on YOLOv8s. The feature extraction and fusion networks were redesigned to optimize the detection layers for better detection. The backbone network incorporates Dual Conv and ELAN to create the EDLAN module. The EDLAN module and the optimized SPPF-LSKA improve small-scale pedestrian feature extraction in complex backgrounds while reducing parameters and computation. In the neck network, BiFPN and VoVGSCSP enhance pedestrian features and improve detection. In addition, the WIoU loss function addresses the target imbalance to enhance generalization ability and overall performance. The enhanced YOLOv8s was trained and validated using the CityPersons dataset. Compared with YOLOv8s, it improved precision, recall, F1 score, and mAP@50 by 5.2%, 7.2%, 6.8%, and 6.8%, respectively, while reducing the parameters by 68% and compressing the model size by 67%. Validation experiments were conducted on the Caltech and BDD100K datasets, where precision increased by 3.4% and 1.1%, and mAP@50 increased by 7.6% and 2.8%, respectively. The modified model reduces the model parameters and size while effectively improving detection accuracy, making it highly valuable for autonomous driving scenarios.
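Claims such as a 68% parameter reduction and a 67% smaller model come down to counting parameters and bytes. The sketch below shows that bookkeeping on two toy PyTorch models; the models themselves are placeholders, not the baseline or enhanced YOLOv8s.

```python
import torch
import torch.nn as nn

def model_footprint(model):
    """Return the parameter count and an approximate size in MB (float32 weights assumed)."""
    params = sum(p.numel() for p in model.parameters())
    size_mb = params * 4 / (1024 ** 2)  # 4 bytes per float32 parameter
    return params, size_mb

# Toy usage: compare a wide and a slimmed-down convolutional stem.
baseline = nn.Sequential(nn.Conv2d(3, 64, 3, padding=1), nn.Conv2d(64, 128, 3, padding=1))
slim = nn.Sequential(nn.Conv2d(3, 16, 3, padding=1), nn.Conv2d(16, 32, 3, padding=1))
(p0, s0), (p1, s1) = model_footprint(baseline), model_footprint(slim)
print(f"params: {p0} -> {p1} ({1 - p1 / p0:.0%} fewer), size: {s0:.2f} MB -> {s1:.2f} MB")
```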
Citations: 0
Shallow multiplexing and multiscale dilation convolution combined attention based oriented object detection in remote sensing images
IF 2.9 | CAS Tier 3 (Engineering & Technology) | Q2 ENGINEERING, ELECTRICAL & ELECTRONIC | Pub Date: 2024-11-10 | DOI: 10.1016/j.dsp.2024.104865
Jiangtao Wang, Jiawei Shi
Remote sensing images are becoming increasingly important in many areas of life because of the valuable information they provide. However, detecting objects in these images remains a difficult task due to their complex and variable characteristics, such as size, scale, and orientation. Moreover, there is a growing demand for efficient and fast detection methods in practical applications. Therefore, in this paper, we propose a framework for oriented object detection in remote sensing images based on shallow multiplexing and multiscale dilation convolution combined attention. To achieve a lightweight network structure, we utilize ResNet18 as the backbone network. First, a shallow multiplexing module (SM) is designed to improve the utilization of detailed information in the shallow layers of the network. It enhances the interaction between the shallow and deep layers, resulting in a richer representation of network features. Second, a multiscale dilation convolution combined attention module (MDCA) is proposed to prioritize contextual information by using convolutions with different dilation rates. This guides the network to focus more on the object information in remote sensing images. Then, the dilated encoder (DE) is employed at the feature fusion stage to enhance the semantic information of the context and produce a feature map with multiple receptive fields. Finally, the log2 loss function is applied to improve the training results. Experiments conducted on three publicly available remote sensing image datasets demonstrate that the proposed algorithm outperforms other algorithms in terms of detection performance. Code is available at https://github.com/sbsfsum/SM-and-MDCA.
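The MDCA module's context capture rests on convolutions with different dilation rates. The sketch below shows a generic multiscale dilated-convolution block; the attention weighting and the dilation rates used here are assumptions, not the paper's exact design.

```python
import torch
import torch.nn as nn

class MultiScaleDilation(nn.Module):
    """Parallel dilated convolutions with increasing receptive fields (illustrative sketch)."""
    def __init__(self, in_ch, out_ch, dilations=(1, 2, 4)):
        super().__init__()
        # padding = dilation keeps the spatial size constant for 3x3 kernels.
        self.branches = nn.ModuleList([
            nn.Conv2d(in_ch, out_ch, 3, padding=d, dilation=d) for d in dilations
        ])
        self.project = nn.Conv2d(out_ch * len(dilations), out_ch, 1)

    def forward(self, x):
        return self.project(torch.cat([b(x) for b in self.branches], dim=1))

# Toy usage: a remote-sensing feature map of shape (batch, 32, 64, 64).
out = MultiScaleDilation(32, 32)(torch.randn(1, 32, 64, 64))
print(out.shape)  # torch.Size([1, 32, 64, 64])
```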
Citations: 0