Digital Signal Processing: Latest Publications

Learning rule in MFR pulse sequence for behavior mode prediction
IF 2.9 | CAS Tier 3 (Engineering & Technology) | Q2 ENGINEERING, ELECTRICAL & ELECTRONIC | Pub Date: 2024-11-07 | DOI: 10.1016/j.dsp.2024.104854
Kun Chi, Jun Hu, Liyan Wang, Jihong Shen
Radar behavior prediction is an important task in the field of electronic reconnaissance. The widely deployed multi-function radar (MFR) can flexibly transition between various work modes, so certain statistical rules governing these behaviors exist in the signal sequence. Most existing radar emission prediction methods are inapplicable to the non-cooperative scenario, since labeled sequence samples are hard to obtain. To address this challenge, this paper proposes an unsupervised framework for learning the behavior rule from the pulse sequence and predicting the radar mode. The framework comprises three modules: sequence segmentation for mode-switch boundary detection, segment clustering for behavior mode recognition, and mode prediction for behavior rule extraction. The framework can predict the state and the numerical parameters of the next mode at the same time. Experimental results demonstrate that the proposed framework achieves considerable prediction performance and shows good robustness under non-ideal conditions.
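The three-module pipeline described above (boundary detection, segment clustering, next-mode prediction) can be illustrated with a deliberately simplified sketch. The jump-threshold segmentation, greedy mean-based clustering, and transition-count predictor below are illustrative stand-ins chosen for brevity, not the paper's actual algorithms, and all names and thresholds are hypothetical:

```python
import numpy as np

def segment(pri_seq, thresh=5.0):
    """Split a pulse-parameter sequence (e.g. PRI values) at abrupt jumps,
    a crude stand-in for mode-switch boundary detection."""
    bounds = [0]
    for i in range(1, len(pri_seq)):
        if abs(pri_seq[i] - pri_seq[i - 1]) > thresh:
            bounds.append(i)
    bounds.append(len(pri_seq))
    return [pri_seq[a:b] for a, b in zip(bounds[:-1], bounds[1:])]

def cluster(segments, tol=5.0):
    """Greedily group segments whose mean parameter values are close,
    yielding an integer mode label per segment without any training labels."""
    centers, labels = [], []
    for seg in segments:
        m = float(np.mean(seg))
        for k, c in enumerate(centers):
            if abs(m - c) < tol:
                labels.append(k)
                break
        else:
            centers.append(m)
            labels.append(len(centers) - 1)
    return labels

def transition_matrix(labels, n_modes):
    """Count mode-to-mode transitions; the argmax of the current mode's row
    gives a simple prediction of the next mode."""
    T = np.zeros((n_modes, n_modes))
    for a, b in zip(labels[:-1], labels[1:]):
        T[a, b] += 1
    return T

# Toy sequence alternating between two PRI levels (~100 and ~200 us).
seq = np.array([100, 101, 99, 200, 201, 199, 100, 100, 200, 202], float)
segs = segment(seq)
labels = cluster(segs)
T = transition_matrix(labels, max(labels) + 1)
next_mode = int(np.argmax(T[labels[-1]]))
```

On the toy sequence the pipeline finds four segments, labels them as two alternating modes, and predicts a return to the first mode.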
An enhanced domain generalization method for object detection based on text guided feature disentanglement
IF 2.9 | CAS Tier 3 (Engineering & Technology) | Q2 ENGINEERING, ELECTRICAL & ELECTRONIC | Pub Date: 2024-11-07 | DOI: 10.1016/j.dsp.2024.104855
Meng Wang, Yudong Liu, Haipeng Liu
The application scenarios of object detection models are constantly changing due to day-night alternation and weather variation. Detectors often suffer from the scarcity of training sets on potential domains. Recently, this challenge, known as domain shift, has been relieved by single domain generalization (SDG). To generalize further towards multiple unseen domains, this paper proposes a detector that uses text semantic gaps to enhance scene diversity and applies feature disentangling to extract domain-invariant features from different scenes, thereby improving detection accuracy. First, random semantic augmentation (RSA) is adopted, leveraging the text modality to capture semantically generalized representations and thereby augmenting the diversity of domain-related information. Second, by broadening the decision boundary between domain-invariant and domain-specific features, feature disentangling (FD) branches are applied to improve the detector's object-background differentiation. Additionally, cross-modality alignment (CMA) is performed by estimating the relevance between domain-specific features and textual domain prompts. Experimental results show that the proposed detector outperforms existing baselines under diverse weather conditions, such as rain, fog, and rainy nights, which also confirms its enhanced generalization to multiple unseen domains.
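The CMA step estimates relevance between visual features and textual domain prompts. A minimal sketch of such a relevance computation, assuming embeddings have already been extracted into a shared space (CLIP-style) and using plain cosine similarity — the paper's exact formulation is not given in the abstract and may differ:

```python
import numpy as np

def cosine_relevance(img_feats, text_prompts):
    """Cosine similarity between each image feature vector (rows of
    img_feats) and each textual domain-prompt embedding (rows of
    text_prompts). Illustrative only; assumes a shared embedding space."""
    a = img_feats / np.linalg.norm(img_feats, axis=1, keepdims=True)
    b = text_prompts / np.linalg.norm(text_prompts, axis=1, keepdims=True)
    return a @ b.T  # (n_images, n_prompts) relevance matrix

# Toy 2-D embeddings: feature 0 aligns with prompt 0, feature 1 does not.
feats = np.array([[1.0, 0.0], [0.0, 1.0]])
prompts = np.array([[1.0, 0.0], [1.0, 1.0]])
rel = cosine_relevance(feats, prompts)
```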
MCNN-CMCA: A multiscale convolutional neural networks with cross-modal channel attention for physiological signal-based mental state recognition
IF 2.9 | CAS Tier 3 (Engineering & Technology) | Q2 ENGINEERING, ELECTRICAL & ELECTRONIC | Pub Date: 2024-11-07 | DOI: 10.1016/j.dsp.2024.104856
Yayun Wei, Lei Cao, Yilin Dong, Tianyu Liu
Human mental state recognition (MSR) has significant implications for human-machine interaction. Although mental state recognition models based on single-modality signals, such as the electroencephalogram (EEG) or peripheral physiological signals (PPS), have achieved encouraging progress, methods leveraging multimodal physiological signals still need to be explored. In this study, we present MCNN-CMCA, a generic model that employs multiscale convolutional neural networks (CNNs) with cross-modal channel attention to realize physiological-signal-based MSR. Specifically, we first design an innovative cross-modal channel attention mechanism that adaptively adjusts the weight of each signal channel, effectively learning both intra-modality and inter-modality correlations and expanding the channel information into the depth dimension. Additionally, the study utilizes multiscale temporal CNNs to obtain short-term and long-term time-frequency features across different modalities. Finally, the multimodal fusion module integrates the representations of all physiological signals, and the classification layer implements sparse connections by setting the mask weights to 0. We evaluate the proposed method on the SEED-VIG, DEAP, and self-made datasets, achieving superior results compared to existing state-of-the-art methods. Furthermore, we conduct ablation studies to demonstrate the effectiveness of each component of MCNN-CMCA and show that using multimodal physiological signals outperforms single-modality signals.
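The core idea of channel attention — weight each stacked signal channel by a learned importance score — can be sketched in a squeeze-and-excitation style. The per-channel descriptor below (mean absolute amplitude) is an illustrative stand-in for the paper's learned cross-modal attention, and all names are hypothetical:

```python
import numpy as np

def softmax(x):
    e = np.exp(x - np.max(x))
    return e / e.sum()

def cross_modal_channel_attention(x):
    """x: (channels, time) array with EEG and PPS channels stacked along the
    channel axis. Each channel is reweighted by a softmax over a simple
    per-channel descriptor (mean absolute amplitude) -- a squeeze-and-
    excitation-style stand-in for the paper's learned attention."""
    desc = np.abs(x).mean(axis=1)   # squeeze: one scalar per channel
    w = softmax(desc)               # excitation: normalized channel weights
    return x * w[:, None], w

rng = np.random.default_rng(0)
x = rng.standard_normal((6, 128))   # e.g. 4 EEG + 2 PPS channels
y, w = cross_modal_channel_attention(x)
```

In the real model the descriptor and weighting would be learned end-to-end rather than fixed statistics.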
The short-term wind power prediction based on a multi-layer stacked model of BOCNN-BiGRU-SA
IF 2.9 | CAS Tier 3 (Engineering & Technology) | Q2 ENGINEERING, ELECTRICAL & ELECTRONIC | Pub Date: 2024-11-07 | DOI: 10.1016/j.dsp.2024.104838
Wen Chen, Hongquan Huang, Xingke Ma, Xinhang Xu, Yi Guan, Guorui Wei, Lin Xiong, Chenglin Zhong, Dejie Chen, Zhonglin Wu
Wind power generation is influenced by various meteorological factors, exhibiting significant volatility and unpredictability. This variability presents considerable challenges for accurate wind power forecasting. In this study, we propose an innovative method for short-term wind power prediction that integrates a Bayesian-optimized Convolutional Neural Network (CNN), Bidirectional Gated Recurrent Units (BiGRU), and a Self-Attention mechanism (SA) within a multi-layer architecture. Initially, we preprocess features using Pearson correlation analysis and input them into the CNN to capture complex nonlinear spatial relationships among multiple feature variables and the current load. Subsequently, the BiGRU captures long-term dependencies from both forward and backward time sequences. Finally, we apply the self-attention mechanism to weight the features and generate the predicted wind power. The model's numerous hyperparameters are optimized with a Bayesian algorithm. In comparative ablation experiments with varying time-segment lengths on wind farm datasets from four regions, our method significantly outperforms 11 models, including Long Short-Term Memory (LSTM), and surpasses several state-of-the-art (SOTA) prediction models, such as iTransformer, PatchTST, Non-stationary Transformers, TSMixer, and DLinear. The highest coefficient of determination (R²) achieved was 0.981, with the Symmetric Mean Absolute Percentage Error (SMAPE), Root Mean Square Error (RMSE), and Mean Absolute Error (MAE) decreasing by 11.22% to 62.04% compared to other models. The results demonstrate the predictive accuracy and generalization performance of the proposed model.
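The reported metrics (R², SMAPE, RMSE, MAE) can be computed as follows. Note that SMAPE has several conventions in the literature; the definition below (factor of 2 in the numerator, reported in percent) is one common choice and may differ from the authors':

```python
import numpy as np

def smape(y, yhat):
    """Symmetric mean absolute percentage error, in percent (one common
    convention; variants differ by a factor of 2)."""
    return 100.0 * np.mean(2.0 * np.abs(yhat - y) / (np.abs(y) + np.abs(yhat)))

def rmse(y, yhat):
    return float(np.sqrt(np.mean((y - yhat) ** 2)))

def mae(y, yhat):
    return float(np.mean(np.abs(y - yhat)))

def r2(y, yhat):
    """Coefficient of determination: 1 - SS_res / SS_tot."""
    ss_res = np.sum((y - yhat) ** 2)
    ss_tot = np.sum((y - np.mean(y)) ** 2)
    return float(1.0 - ss_res / ss_tot)

# Toy forecast vs. ground truth.
y = np.array([10.0, 20.0, 30.0, 40.0])
yhat = np.array([12.0, 18.0, 33.0, 41.0])
```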
MSL-CCRN: Multi-stage self-supervised learning based cross-modality contrastive representation network for infrared and visible image fusion
IF 2.9 | CAS Tier 3 (Engineering & Technology) | Q2 ENGINEERING, ELECTRICAL & ELECTRONIC | Pub Date: 2024-11-06 | DOI: 10.1016/j.dsp.2024.104853
Zhilin Yan, Rencan Nie, Jinde Cao, Guangxu Xie, Zhengze Ding
Infrared and visible image fusion (IVIF) must reconcile the different information carried by the two modalities, so the focus of research is how to better extract this complementary information. In this work, we propose a multi-stage self-supervised learning based cross-modality contrastive representation network for infrared and visible image fusion (MSL-CCRN). First, considering that scene differences between modalities affect the fusion of cross-modal images, we propose a contrastive representation network (CRN). CRN enhances the interaction between the fused image and the source images, and significantly improves the similarity between the meaningful features of each modality and the fused image. Second, because IVIF lacks ground truth, the quality of a directly obtained fused image is seriously degraded. We design a multi-stage fusion strategy to address the loss of important information in this process. Notably, our method is a self-supervised network. In fusion stage I, we reconstruct the initial fused image as a new view for fusion stage II. In fusion stage II, we use the fused image obtained in the previous stage to carry out three-view contrastive representation, thereby constraining the feature extraction of the source images. This allows the final fused image to incorporate more of the important information in the source images. Through extensive qualitative and quantitative experiments and downstream object detection experiments, our proposed method shows excellent performance compared with most advanced methods.
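Contrastive representation objectives of the kind CRN relies on are typically InfoNCE-style losses: paired views should be more similar to each other than to other samples in the batch. A generic two-view sketch (the paper uses a three-view variant whose details are not given in the abstract; the temperature and names here are illustrative):

```python
import numpy as np

def info_nce_loss(z1, z2, tau=0.5):
    """InfoNCE loss between two views: row i of z1 should match row i of z2
    against all other rows. A generic stand-in for the paper's three-view
    contrastive representation."""
    z1 = z1 / np.linalg.norm(z1, axis=1, keepdims=True)
    z2 = z2 / np.linalg.norm(z2, axis=1, keepdims=True)
    logits = z1 @ z2.T / tau                      # (n, n) similarity matrix
    logits -= logits.max(axis=1, keepdims=True)   # numerical stability
    p = np.exp(logits)
    p /= p.sum(axis=1, keepdims=True)
    return float(-np.mean(np.log(np.diag(p))))    # -log prob of true pairs

rng = np.random.default_rng(1)
z = rng.standard_normal((4, 8))
loss_matched = info_nce_loss(z, z)                    # views perfectly aligned
loss_mismatched = info_nce_loss(z, np.roll(z, 1, axis=0))  # pairing broken
```

Matched views put the maximum similarity on the diagonal, so the loss is bounded by log(batch size); breaking the pairing pushes it above log 2.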
Continuous discrete minimum error entropy Kalman filter in non-Gaussian noises system
IF 2.9 | CAS Tier 3 (Engineering & Technology) | Q2 ENGINEERING, ELECTRICAL & ELECTRONIC | Pub Date: 2024-10-31 | DOI: 10.1016/j.dsp.2024.104846
Zhifa Liu, Ruide Zhang, Yujie Wang, Haowei Zhang, Gang Wang, Ying Zhang
This paper proposes a continuous-discrete linear Kalman filtering algorithm based on the minimum error entropy criterion for non-Gaussian noise environments. Traditional Kalman filters struggle in such environments because of their reliance on Gaussian assumptions. Our approach leverages stochastic differential equations to model the system dynamics precisely and integrates the minimum error entropy criterion to capture higher-order statistical properties of non-Gaussian noise. Simulations confirm that the proposed algorithm significantly enhances estimation accuracy and robustness compared to conventional methods, demonstrating its effectiveness in complex, noisy environments.
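The abstract does not give the filter equations, but the flavor of entropy/correntropy-based robustification can be sketched on a scalar discrete-time example: the measurement update is reweighted by a Gaussian kernel of the innovation, so heavy-tailed outliers barely move the estimate. This is a maximum-correntropy-style illustration (a related robust criterion), not the authors' continuous-discrete MEE algorithm; all parameters are illustrative:

```python
import numpy as np

def robust_kf(zs, q=1e-3, r=1.0, sigma=10.0):
    """Scalar random-walk Kalman filter whose measurement update is
    reweighted by a Gaussian kernel of the innovation. Large innovations
    get weight ~0, so outliers are effectively ignored."""
    x, p = 0.0, 1.0
    out = []
    for z in zs:
        p = p + q                                  # predict (random-walk state)
        e = z - x                                  # innovation
        w = np.exp(-e * e / (2.0 * sigma ** 2))    # kernel weight on this measurement
        k = p * w / (p * w + r)                    # reweighted Kalman gain
        x = x + k * e
        p = (1.0 - k) * p
        out.append(x)
    return np.array(out)

# Constant level 5.0 observed in noise-free form, with one gross outlier.
zs = np.concatenate([np.full(20, 5.0), [100.0], np.full(20, 5.0)])
est = robust_kf(zs)
```

A standard Kalman filter would be dragged far off by the outlier at index 20; here the kernel weight collapses the gain at that step.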
Interpretable ADMM-CSNet for interrupted sampling repeater jamming suppression
IF 2.9 | CAS Tier 3 (Engineering & Technology) | Q2 ENGINEERING, ELECTRICAL & ELECTRONIC | Pub Date: 2024-10-31 | DOI: 10.1016/j.dsp.2024.104850
Quan Huang, Shaopeng Wei, Lei Zhang
Interrupted sampling repeater jamming (ISRJ) is a category of coherent jamming that greatly degrades radar detection performance. Since ISRJ has greater power than true targets, ISRJ signals can be excised in the time domain. However, due to the resulting loss of frequency band content, grating lobes appear if pulse compression (PC) is performed directly on the excised signal, which may generate false targets. Compressive sensing (CS) is an effective method for restoring the original PC signal, but classic CS approaches require manually selecting optimization parameters (e.g., penalty parameters, step sizes) for each ISRJ background, which is challenging. In this article, a network method based on the Alternating Direction Method of Multipliers (ADMM), named ADMM-CSNet, is introduced to solve this problem. Exploiting the strong learning capacity of deep networks, all parameters of the ADMM are learned from radar data via back-propagation rather than selected manually as in traditional CS techniques. Compared with classic CS approaches, higher signal restoration accuracy after ISRJ removal is reached faster. Simulation experiments indicate that the proposed method reconstructs the ISRJ-excised signal effectively and accurately.
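The classic CS baseline that learned unrolled networks like ADMM-CSNet improve on can be illustrated with plain ISTA: recover a spectrum-sparse signal from only the samples that survive time-domain excision. The orthonormal DCT sparsity basis, λ, and iteration count below are arbitrary illustrative choices, not the paper's setup:

```python
import numpy as np

def soft(x, t):
    """Soft-thresholding, the proximal operator of the l1 norm."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def dct_matrix(n):
    """Orthonormal DCT-II basis; rows are basis vectors."""
    k = np.arange(n)[:, None]
    i = np.arange(n)[None, :]
    D = np.sqrt(2.0 / n) * np.cos(np.pi * (i + 0.5) * k / n)
    D[0] /= np.sqrt(2.0)
    return D

def ista_recover(y_obs, mask, n, lam=0.01, n_iter=500):
    """Fill excised gaps by sparse recovery: assume the signal is sparse in
    an orthonormal DCT basis and run ISTA on the surviving samples only.
    Step size 1 is valid because the operator has spectral norm <= 1."""
    D = dct_matrix(n)
    A = D.T[mask]                                 # observation: kept rows only
    a = np.zeros(n)
    for _ in range(n_iter):
        a = soft(a - A.T @ (A @ a - y_obs), lam)  # gradient step + shrinkage
    return D.T @ a                                # resynthesized full signal

# 2-sparse spectrum; samples 20..35 excised (a contiguous ISRJ-style gap).
n = 64
a_true = np.zeros(n)
a_true[5], a_true[12] = 1.0, -0.7
x_true = dct_matrix(n).T @ a_true
mask = np.ones(n, dtype=bool)
mask[20:36] = False
x_rec = ista_recover(x_true[mask], mask, n)
```

ADMM-CSNet replaces the hand-picked λ and step size here with parameters learned by back-propagation.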
Self-learning based joint multi image super-resolution and sub-pixel registration
IF 2.9 | CAS Tier 3 (Engineering & Technology) | Q2 ENGINEERING, ELECTRICAL & ELECTRONIC | Pub Date: 2024-10-31 | DOI: 10.1016/j.dsp.2024.104837
Hansol Kim, Sukho Lee, Moon Gi Kang
Multi Image Super-resolution (MISR) refers to the task of enhancing the spatial resolution of a stack of low-resolution (LR) images representing the same scene. Although many deep learning-based single image super-resolution (SISR) technologies have recently been developed, deep learning has not been widely exploited for MISR, even though MISR can achieve higher reconstruction accuracy because more information can be extracted from the stack of LR images. One of the primary obstacles for deep networks addressing the MISR problem is the variability in the number of LR images that act as input to the network. This impedes an end-to-end learning approach, because the varying number of input images makes it difficult to construct a training dataset for the network. Another challenge arises from the requirement to align the LR input images to generate a high-quality high-resolution (HR) image, which demands complex and sophisticated methods.
In this paper, we propose a self-learning based method that can simultaneously perform super-resolution and sub-pixel registration of multiple LR images. The proposed method trains a neural network with only the LR images as input and without any true target HR images; i.e., it requires no extra training dataset. This makes it easy to apply the method to different numbers of input images. To our knowledge, this is the first time that a neural network has been trained using only LR images to perform joint MISR and sub-pixel registration. Experimental results confirm that the HR images generated by the proposed method achieve better results in both quantitative and qualitative evaluations than those generated by other deep learning-based methods.
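Registration of an LR stack is classically done by phase correlation; the sketch below estimates an integer-pixel circular translation (interpolating around the correlation peak extends this to the sub-pixel shifts the paper addresses). This is a standard baseline for context, not the paper's learned joint method:

```python
import numpy as np

def phase_correlation_shift(ref, moved):
    """Estimate the (row, col) translation of `moved` relative to `ref`:
    the normalized cross-power spectrum inverse-transforms to a delta at
    the shift. Integer-pixel here for clarity."""
    R = np.conj(np.fft.fft2(ref)) * np.fft.fft2(moved)
    R /= np.abs(R) + 1e-12                       # keep phase, drop magnitude
    corr = np.fft.ifft2(R).real
    idx = np.unravel_index(np.argmax(corr), corr.shape)
    # map peak indices from [0, n) to signed shifts in [-n/2, n/2)
    return tuple(float(i - n if i > n // 2 else i) for i, n in zip(idx, corr.shape))

rng = np.random.default_rng(42)
img = rng.standard_normal((32, 32))
shift = phase_correlation_shift(img, np.roll(img, (3, -4), axis=(0, 1)))
```

For an exact circular shift the normalized spectrum is a pure phase ramp, so the correlation surface is an exact delta at the displacement.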
Hansol Kim, Sukho Lee, Moon Gi Kang, "Self-learning based joint multi image super-resolution and sub-pixel registration," Digital Signal Processing, vol. 156, Article 104837, 2024, doi: 10.1016/j.dsp.2024.104837.
Citations: 0
Dynamic mode decomposition-based technique for cross-term suppression in the Wigner-Ville distribution
IF 2.9 | CAS Tier 3 (Engineering & Technology) | Q2 ENGINEERING, ELECTRICAL & ELECTRONIC | Pub Date: 2024-10-29 | DOI: 10.1016/j.dsp.2024.104833
Alavala Siva Sankar Reddy, Ram Bilas Pachori
This paper presents a new method for time-frequency representation (TFR) based on dynamic mode decomposition (DMD) and the Wigner-Ville distribution (WVD), termed DMD-WVD. The proposed method removes cross-terms from WVD-based TFRs. In the suggested method, DMD decomposes a multi-component signal into a set of modes, each of which is treated as a mono-component signal. The analytic version of each mono-component mode is computed using the Hilbert transform. The WVD is then computed for each analytic mode, and the per-mode WVDs are summed to obtain a cross-term-free TFR. The effectiveness of the proposed method is evaluated using Rényi entropy (RE). Experimental results are presented for synthetic signals (a multi-component amplitude-modulated signal, a multi-component linear frequency-modulated (LFM) signal, a multi-component nonlinear frequency-modulated (NLFM) signal, a multi-component signal consisting of LFM and NLFM mono-component signals, a multi-component signal consisting of sinusoidal and quadratic frequency-modulated mono-component signals, and a synthetic mechanical bearing-fault signal) as well as for natural signals, namely electroencephalogram (EEG) and bat echolocation signals, to show the effectiveness of the proposed method. The results show that the proposed method suppresses cross-terms more effectively than the other existing methods, namely the smoothed pseudo WVD (SPWVD), empirical mode decomposition (EMD)-WVD, EMD-SPWVD, variational mode decomposition (VMD)-WVD, VMD-SPWVD, and DMD-SPWVD.
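Once the mono-component modes are in hand, the core of the pipeline reduces to a simple identity: summing the WVDs of the analytic modes yields a TFR with no cross-terms, because products between different components are never formed. The NumPy sketch below illustrates only that final step; it supplies the mono-component modes directly rather than extracting them with DMD, so it is a toy check of the cross-term argument, not the proposed DMD-WVD method.

```python
import numpy as np

def analytic(x):
    """FFT-based analytic signal (the Hilbert-transform construction)."""
    N = len(x)
    X = np.fft.fft(x)
    h = np.zeros(N)
    h[0] = 1.0
    if N % 2 == 0:
        h[N // 2] = 1.0
        h[1:N // 2] = 2.0
    else:
        h[1:(N + 1) // 2] = 2.0
    return np.fft.ifft(X * h)

def wvd(x):
    """Discrete Wigner-Ville distribution of a 1-D analytic signal.
    Returns an (N, N) array: W[n, k] is the energy at time n and
    frequency bin k, where bin k maps to normalized frequency k/(2N)."""
    x = np.asarray(x, dtype=complex)
    N = len(x)
    W = np.empty((N, N))
    for n in range(N):
        tau_max = min(n, N - 1 - n)          # lags that stay in range
        taus = np.arange(-tau_max, tau_max + 1)
        kernel = np.zeros(N, dtype=complex)
        kernel[taus % N] = x[n + taus] * np.conj(x[n - taus])
        W[n] = np.real(np.fft.fft(kernel))   # FFT over the lag variable
    return W

def crossterm_free_tfr(modes):
    """Sum of per-mode WVDs: auto-terms survive, cross-terms never form
    because no product between different modes is ever taken."""
    return sum(wvd(analytic(m)) for m in modes)
```

For two real tones at normalized frequencies 0.125 and 0.375, `wvd(analytic(m1 + m2))` shows a strong spurious ridge midway between them, while `crossterm_free_tfr([m1, m2])` does not.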
Alavala Siva Sankar Reddy, Ram Bilas Pachori, "Dynamic mode decomposition-based technique for cross-term suppression in the Wigner-Ville distribution," Digital Signal Processing, vol. 156, Article 104833, 2024, doi: 10.1016/j.dsp.2024.104833.
Citations: 0
DuINet: A dual-branch network with information exchange and perceptual loss for enhanced image denoising
IF 2.9 | CAS Tier 3 (Engineering & Technology) | Q2 ENGINEERING, ELECTRICAL & ELECTRONIC | Pub Date: 2024-10-28 | DOI: 10.1016/j.dsp.2024.104835
Xiaotong Wang, Yibin Tang, Cheng Yao, Yuan Gao, Ying Chen
Image denoising is a fundamental task in image processing and low-level computer vision, often necessitating a delicate balance between noise removal and the preservation of fine details. In recent years, deep learning approaches, particularly those utilizing various neural network architectures, have shown significant promise in addressing this challenge. In this study, we propose DuINet, a novel dual-branch network specifically designed to capture complementary aspects of image information. DuINet integrates an information exchange module that facilitates effective feature sharing between the branches, and it incorporates a perceptual loss function aimed at enhancing the visual quality of the denoised images. Extensive experimental results demonstrate that DuINet surpasses existing dual-branch models and several state-of-the-art convolutional neural network (CNN)-based methods, particularly under conditions of severe noise where preserving fine details and textures is critical. Moreover, DuINet maintains competitive performance in terms of the LPIPS index when compared to deeper or larger networks such as Restormer and MIRNet, underscoring its ability to deliver high visual quality in denoised images.
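The abstract does not specify the perceptual loss, so the following is only a generic illustration of the idea: measure the distance between two images in a feature space rather than in pixel space, so that structural and textural differences are weighted more heavily than raw per-pixel error. Here a fixed random filter bank with a ReLU stands in for the pretrained feature extractor such losses normally use; all names and choices below are assumptions, not DuINet's implementation.

```python
import numpy as np

def features(img, kernels):
    """Toy feature extractor: valid-mode correlation with a fixed filter
    bank followed by ReLU, standing in for a pretrained network's layer."""
    kh, kw = kernels.shape[1:]
    H, W = img.shape
    out = np.empty((kernels.shape[0], H - kh + 1, W - kw + 1))
    for c, k in enumerate(kernels):
        for y in range(H - kh + 1):
            for x in range(W - kw + 1):
                out[c, y, x] = np.sum(img[y:y + kh, x:x + kw] * k)
    return np.maximum(out, 0.0)  # ReLU nonlinearity

def perceptual_loss(a, b, kernels):
    """Mean squared error between feature maps instead of raw pixels."""
    fa, fb = features(a, kernels), features(b, kernels)
    return float(np.mean((fa - fb) ** 2))
```

In training, a term like this is typically weighted and added to a pixel-wise loss; with a real pretrained extractor it penalizes differences in texture and structure that per-pixel MSE underweights, which is why it tends to improve perceptual metrics such as LPIPS.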
Xiaotong Wang, Yibin Tang, Cheng Yao, Yuan Gao, Ying Chen, "DuINet: A dual-branch network with information exchange and perceptual loss for enhanced image denoising," Digital Signal Processing, vol. 156, Article 104835, 2024, doi: 10.1016/j.dsp.2024.104835.
Citations: 0