
Digital Signal Processing: Latest Publications

Multimodal enhanced underwater image generation method using flow matching model
IF 3.0 | CAS Tier 3 (Engineering & Technology) | Q2 ENGINEERING, ELECTRICAL & ELECTRONIC | Pub Date: 2026-01-28 | DOI: 10.1016/j.dsp.2026.105964
Haifeng Yu, Changxu Zhu, Ruicheng Zhang, Yankai Feng, Xinbin Li
Underwater Image Enhancement (UIE) methods and Underwater Object Detection (UOD) algorithms are used to monitor the growth of marine aquaculture organisms. However, compared with detection on the original underwater image, image enhancement can degrade object detection accuracy. This paper proposes a Multimodal Enhanced Underwater Image Generation method based on flow matching (MEUIG) to generate enhanced underwater images that retain object feature information. First, a dual-branch flow matching model is designed, comprising a feature extraction branch and an image enhancement branch. The feature extraction branch extracts object feature information from the original underwater images, while the image enhancement branch produces the enhanced underwater image through the color-line method. Then, a fusion module is proposed to combine the information of the different modalities: the image generated by flow matching, the extracted feature information, and the enhanced image. Additionally, a feature extraction module is constructed to extract object features from the original image. Finally, a new loss function is designed that accounts for the pixel movement path, the feature difference between the condition image and the output image, and the reconstruction loss. Qualitative and quantitative evaluations show that MEUIG improves image quality while retaining the original information, and it achieves significantly higher detection accuracy with YOLOv11 than existing underwater enhancement methods. In the detection of echinus, MEUIG is 18.8% and 9.7% higher than the compared enhancement methods, respectively. The code of the MEUIG model and the 4889 dataset used for training it are available at https://github.com/Warmth-0213/MEUIG.git; the 5455 underwater object detection dataset is available at https://github.com/Warmth-0213/data1.git.
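The generative core described above is flow matching. As a rough illustration only, the sketch below shows a generic conditional flow-matching training step (straight-line probability path, velocity regression), with a toy VelocityNet standing in for the paper's dual-branch model; the network, its conditioning, and all shapes are assumptions, not the authors' implementation.

```python
# Minimal sketch of a generic conditional flow-matching objective (rectified-flow form).
# `VelocityNet` and its conditioning on the original underwater image are hypothetical
# placeholders, not the MEUIG architecture.
import torch
import torch.nn as nn

class VelocityNet(nn.Module):
    """Toy velocity field v_theta(x_t, t | condition)."""
    def __init__(self, channels=3):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(2 * channels + 1, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, channels, 3, padding=1),
        )

    def forward(self, x_t, t, cond):
        # Broadcast the scalar time t to a feature map and concatenate with the condition image.
        t_map = t.view(-1, 1, 1, 1).expand(-1, 1, *x_t.shape[-2:])
        return self.net(torch.cat([x_t, cond, t_map], dim=1))

def flow_matching_loss(model, x1, cond):
    """x1: target (enhanced) image batch; cond: original underwater image batch."""
    x0 = torch.randn_like(x1)                      # source sample (noise)
    t = torch.rand(x1.shape[0], device=x1.device)  # random time in [0, 1]
    t_ = t.view(-1, 1, 1, 1)
    x_t = (1.0 - t_) * x0 + t_ * x1                # straight-line interpolation path
    v_target = x1 - x0                             # constant velocity along that path
    v_pred = model(x_t, t, cond)
    return ((v_pred - v_target) ** 2).mean()

model = VelocityNet()
x1 = torch.randn(2, 3, 64, 64)    # dummy "enhanced" targets
cond = torch.randn(2, 3, 64, 64)  # dummy original underwater images
loss = flow_matching_loss(model, x1, cond)
loss.backward()
```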
Citations: 0
Enhanced underwater object tracking via adaptive image enhancement and multi-regularized correlation filters
IF 3.0 | CAS Tier 3 (Engineering & Technology) | Q2 ENGINEERING, ELECTRICAL & ELECTRONIC | Pub Date: 2026-01-28 | DOI: 10.1016/j.dsp.2026.105958
Endong Liu, Lihui Wang
Underwater Object Tracking (UOT) is essential for underwater ecological monitoring, marine resource exploration, and autonomous underwater robotics, yet it remains challenging due to low visibility, illumination variations, visual aberrations, and severe color distortions. To address these issues, this paper proposes a task-driven underwater object tracking framework that tightly integrates selective image enhancement with a multi-regularized correlation filter. Specifically, an adaptive image enhancement strategy derived from the generalized Dark Channel Prior (DCP) is selectively activated using CCF indicators (colorfulness, contrast, and fog density), enabling effective visual enhancement while preserving real-time performance. On this basis, a multi-regularized correlation filter incorporating Gaussian-shaped spatial constraints and channel reliability weighting is formulated to improve robustness and localization accuracy under complex underwater conditions. The resulting optimization problem is efficiently solved within an ADMM framework. Extensive experiments on the UOT100 and UTB180 datasets demonstrate that the proposed method consistently outperforms state-of-the-art trackers, achieving superior precision and success rates in challenging underwater scenarios.
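To illustrate the selective-activation idea, the sketch below gates enhancement on cheap frame statistics in the spirit of the CCF indicators (colorfulness, contrast, fog density). The indicator formulas, the dark-channel fog proxy, and the thresholds are illustrative assumptions rather than the paper's definitions.

```python
# A minimal sketch of CCF-style gating: image statistics decide whether to run the more
# expensive DCP-based enhancement before tracking. Thresholds are arbitrary placeholders.
import numpy as np

def colorfulness(img):
    """Hasler-Suesstrunk colorfulness on an RGB float image in [0, 1]."""
    r, g, b = img[..., 0], img[..., 1], img[..., 2]
    rg, yb = r - g, 0.5 * (r + g) - b
    return np.sqrt(rg.std() ** 2 + yb.std() ** 2) + 0.3 * np.sqrt(rg.mean() ** 2 + yb.mean() ** 2)

def contrast(img):
    """Global RMS contrast of the luminance channel."""
    gray = img @ np.array([0.299, 0.587, 0.114])
    return gray.std()

def fog_density_proxy(img, patch=15):
    """Mean dark channel as a crude haze/turbidity proxy (higher = foggier)."""
    dark = img.min(axis=-1)
    h, w = dark.shape
    pooled = dark[: h - h % patch, : w - w % patch].reshape(h // patch, patch, w // patch, patch)
    return pooled.min(axis=(1, 3)).mean()

def should_enhance(img, thr_color=0.02, thr_contrast=0.08, thr_fog=0.5):
    """Activate enhancement only when the frame looks dull, flat, or turbid."""
    return colorfulness(img) < thr_color or contrast(img) < thr_contrast or fog_density_proxy(img) > thr_fog

frame = np.random.rand(240, 320, 3)
print("enhance this frame:", should_enhance(frame))
```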
Citations: 0
Semi-supervised radar signal sorting with multiview subspace representations and graph learning
IF 3.0 | CAS Tier 3 (Engineering & Technology) | Q2 ENGINEERING, ELECTRICAL & ELECTRONIC | Pub Date: 2026-01-28 | DOI: 10.1016/j.dsp.2026.105963
Shuai Huang, Qiang Guo, Yuhang Tian, Hao Feng, Sergey Shulga
In complex electromagnetic environments, radar pulse signals are strongly affected by noise, and the limitations of reconnaissance receivers enlarge measurement errors, causing severe pulse loss and numerous spurious pulses. Consequently, pulse sorting faces two key difficulties: mining pulse association relations under missing information, and maintaining inter-class separability under severe overlap of parameter features. We propose a semi-supervised radar signal sorting method based on multiview subspace representation and graph learning (MvSR-GCN-RSS). First, encoders map multiple views into the latent space, where the view-specific and universal self-representation matrices are solved, and pulse-sequence adjacency relations are constructed from intrapulse and interpulse information. Then, multiview information complementarity is achieved through a consistency loss and a diversity loss. In contrast to the two-stage process of graph construction followed by spectral clustering, we couple the adjacency-matrix solution with a graph convolutional network (GCN) in a single end-to-end framework, jointly optimizing it with the parameters of the multiview encoders and decoders to improve sorting efficiency. Finally, we design a multiview joint loss that simultaneously optimizes view reconstruction, GCN-based classification, self-representation solving, and cross-view complementarity for radar signal sorting. Simulation results show that the sorting accuracy reaches 99.99% in ideal scenarios; under scenarios with large measurement errors, missing pulses, and numerous spurious pulses, the proposed method performs far better than the comparison algorithms.
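A central ingredient above is the self-representation matrix solved in the latent space. The numpy sketch below shows the classic ridge-regularized self-expressiveness step and the symmetric affinity graph built from it as a standalone illustration; it omits the multiview encoders, the consistency/diversity losses, and the end-to-end GCN coupling, and lam is an assumed hyperparameter.

```python
# Minimal sketch of subspace self-representation for pulse sorting: each pulse feature
# vector is reconstructed from the others, and the coefficient matrix yields a
# pulse-affinity graph for downstream graph learning. Not the paper's full pipeline.
import numpy as np

def self_representation(Z, lam=0.1):
    """Z: (n_pulses, n_features). Solve min_C ||Z - C Z||_F^2 + lam ||C||_F^2 in closed form."""
    G = Z @ Z.T                               # Gram matrix of pulse features
    C = G @ np.linalg.inv(G + lam * np.eye(len(Z)))
    np.fill_diagonal(C, 0.0)                  # a pulse should not explain itself
    return C

def affinity_graph(C):
    """Symmetric adjacency for graph learning / spectral clustering."""
    return 0.5 * (np.abs(C) + np.abs(C).T)

rng = np.random.default_rng(0)
Z = np.vstack([rng.normal(0, 1, (20, 6)), rng.normal(5, 1, (20, 6))])  # two toy pulse clusters
A = affinity_graph(self_representation(Z))
print(A.shape)  # (40, 40) pulse-to-pulse affinities
```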
Citations: 0
DiVOT: Differentiated interaction-guided video-level object tracking
IF 3.0 | CAS Tier 3 (Engineering & Technology) | Q2 ENGINEERING, ELECTRICAL & ELECTRONIC | Pub Date: 2026-01-27 | DOI: 10.1016/j.dsp.2026.105955
Zhixi Wu, Si Chen, Da-Han Wang, Shunzhi Zhu
Recent video-level methods have made significant strides in object tracking by leveraging multiple online templates to capture rich temporal information. However, most existing methods treat online templates as equally important as the initial template, overlooking the inherent instability of online templates during updating, which degrades tracking performance. To alleviate this issue, we propose a novel differentiated interaction-guided video-level object tracking method, termed DiVOT, which mitigates the impact of template instability and boosts tracking performance. Our feature extraction network is built on a differentiated encoder block that differentially guides the interaction between the search region and the various templates, enabling the tracker to balance stability and adaptability. Additionally, we design an auxiliary module, the memory decoder, to compensate for a deficiency of the differentiated interaction: the latency of online templates hinders access to the most recent target appearance information. Extensive experiments on six mainstream datasets, i.e., OTB100, GOT-10k, TrackingNet, VOT2020, NFS, and LaSOT, validate the effectiveness of our proposed method.
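As a rough illustration of "differentiated" template interaction, the sketch below lets the search region attend separately to the stable initial template and to the noisier online templates, then mixes the two with a learned gate. The module layout and gating scheme are assumptions for illustration, not the DiVOT encoder block.

```python
# Minimal sketch of differentiated search-template interaction with a learned gate.
# Layer names, sizes, and the gating scheme are hypothetical.
import torch
import torch.nn as nn

class DifferentiatedInteraction(nn.Module):
    def __init__(self, dim=256, heads=8):
        super().__init__()
        self.attn_init = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.attn_online = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.gate = nn.Sequential(nn.Linear(2 * dim, dim), nn.Sigmoid())

    def forward(self, search, init_tpl, online_tpl):
        # search: (B, Ns, D); init_tpl: (B, Ni, D); online_tpl: (B, No, D)
        from_init, _ = self.attn_init(search, init_tpl, init_tpl)
        from_online, _ = self.attn_online(search, online_tpl, online_tpl)
        g = self.gate(torch.cat([from_init, from_online], dim=-1))  # per-token mixing weight
        return search + g * from_init + (1.0 - g) * from_online     # stability vs. adaptability

block = DifferentiatedInteraction()
search = torch.randn(2, 64, 256)
init_tpl = torch.randn(2, 16, 256)
online_tpl = torch.randn(2, 48, 256)
print(block(search, init_tpl, online_tpl).shape)  # torch.Size([2, 64, 256])
```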
Citations: 0
EMFNet: An efficient multi-scale fusion network for UAV small object detection
IF 3.0 | CAS Tier 3 (Engineering & Technology) | Q2 ENGINEERING, ELECTRICAL & ELECTRONIC | Pub Date: 2026-01-27 | DOI: 10.1016/j.dsp.2026.105952
Mingquan Wang, Huiying Xu, Yiming Sun, Hongbo Li, Zeyu Wang, Yi Li, Ruidong Wang, Xinzhong Zhu
Object detection in UAV aerial images holds significant application value in traffic monitoring, precision agriculture, and other fields. However, the task faces numerous challenges, including large variations in object size, complex background interference, high object density, and class imbalance; processing high-resolution aerial images also involves disturbances such as uneven lighting and weather variations. To address these challenges, we propose EMFNet, a model that enhances the response to object regions under different lighting and weather conditions, suppresses interference from complex backgrounds, and improves adaptability to changes in object size. First, the lightweight vision transformer RepViT is adopted as the backbone of EMFNet, combined with Dual Cross-Stage Partial Attention (DCPA) to optimize multi-scale feature fusion and background suppression, thereby improving small-object feature extraction under varying lighting and weather conditions. Second, we propose the Context Guided Downsample Block (CGDB) to improve the downsampling process and mitigate the loss of feature information. Finally, the DyHead detection head, which employs a three-level attention mechanism, drives three appropriately placed prediction heads for classification and localization, improving the detection accuracy of dense and rare objects. Experiments on the VisDrone and UAVDT datasets demonstrate that EMFNet, with 6.76M parameters, achieves AP improvements of 7.5% and 15.2% over the baseline models, respectively.
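For intuition about information-preserving downsampling of the kind CGDB targets, the sketch below fuses a strided-convolution branch with a pooled context branch. Its structure is an illustrative assumption; the abstract does not specify the internals of the authors' CGDB.

```python
# Minimal sketch of an information-preserving downsampling block: a strided convolution is
# fused with a pooled "context" branch so small-object detail is not lost outright.
# This layout is illustrative, not the paper's CGDB.
import torch
import torch.nn as nn

class ContextGuidedDownsample(nn.Module):
    def __init__(self, c_in, c_out):
        super().__init__()
        self.local = nn.Conv2d(c_in, c_out // 2, kernel_size=3, stride=2, padding=1)
        self.context = nn.Sequential(
            nn.MaxPool2d(kernel_size=2, stride=2),
            nn.Conv2d(c_in, c_out // 2, kernel_size=1),
        )
        self.fuse = nn.Sequential(nn.BatchNorm2d(c_out), nn.SiLU())

    def forward(self, x):
        return self.fuse(torch.cat([self.local(x), self.context(x)], dim=1))

x = torch.randn(1, 64, 80, 80)
print(ContextGuidedDownsample(64, 128)(x).shape)  # torch.Size([1, 128, 40, 40])
```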
Citations: 0
WCC-Net: Lightweight automatic modulation recognition of integrated underwater acoustic signals
IF 3.0 | CAS Tier 3 (Engineering & Technology) | Q2 ENGINEERING, ELECTRICAL & ELECTRONIC | Pub Date: 2026-01-27 | DOI: 10.1016/j.dsp.2026.105961
Xuerong Cui, Kai Zheng, Juan Li, Lei Li, Bin Jiang
To support the detection and communication requirements of offshore devices operating under stringent resource constraints, it is essential to overcome the challenges posed by complex underwater acoustic channels and intense ocean noise. Consequently, designing a lightweight automatic modulation recognition (AMR) algorithm for integrated underwater acoustic detection and communication signals is particularly challenging. Despite recent advances, current AMR algorithms still exhibit limitations in computational speed and resource usage. Moreover, to date, no AMR method has been specifically designed for integrated acoustic detection and communication (IADC) signal frameworks. To address these issues, this paper proposes a Wavelet Complex Convolution Network (WCC-Net) that directly uses in-phase/quadrature (I/Q) signals as input. First, the in-phase and quadrature components of the signal are each fed into two independent wavelet convolution modules, which simultaneously enlarge the receptive field and suppress noise. Then, a complex convolution module preserves the phase coupling information while efficiently mixing the feature information. Finally, an efficient feature mixing module combines and refines the high-dimensional features to produce the classification result, reducing redundant information and enhancing feature interaction. Experimental results indicate that, at about 89% recognition accuracy, WCC-Net reduces the computational complexity by 84.76% and the number of parameters by 88.82%; under the same model complexity, WCC-Net accuracy is improved by at least 6.91%. Even under real-world ocean noise conditions, WCC-Net attains competitive recognition accuracy with minimal model complexity.
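The complex convolution module operates on I/Q pairs. The sketch below shows a standard complex-valued 1-D convolution, the kind of operation such a module is built around; channel sizes and the surrounding wavelet branches are omitted, and the layer is illustrative, not the WCC-Net implementation.

```python
# Minimal sketch of a complex-valued 1-D convolution over I/Q signals: real and imaginary
# kernels are combined by complex multiplication, preserving I/Q phase coupling.
import torch
import torch.nn as nn

class ComplexConv1d(nn.Module):
    def __init__(self, c_in, c_out, k=5):
        super().__init__()
        self.conv_r = nn.Conv1d(c_in, c_out, k, padding=k // 2)
        self.conv_i = nn.Conv1d(c_in, c_out, k, padding=k // 2)

    def forward(self, x_i, x_q):
        # (W_r + j W_i) * (x_i + j x_q): standard complex multiplication of kernel and signal.
        real = self.conv_r(x_i) - self.conv_i(x_q)
        imag = self.conv_r(x_q) + self.conv_i(x_i)
        return real, imag

iq = torch.randn(8, 2, 1024)                 # batch of I/Q frames: channel 0 = I, channel 1 = Q
layer = ComplexConv1d(1, 16)
real, imag = layer(iq[:, :1], iq[:, 1:])
print(real.shape, imag.shape)                # torch.Size([8, 16, 1024]) twice
```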
Citations: 0
Capturing HDR video in challenging light conditions by beam-splitting ratio variable multi-sensor system
IF 3.0 | CAS Tier 3 (Engineering & Technology) | Q2 ENGINEERING, ELECTRICAL & ELECTRONIC | Pub Date: 2026-01-25 | DOI: 10.1016/j.dsp.2026.105956
Zhangchi Qiao, Hongwei Yi, Desheng Wen, Yong Han
Recording video in HDR scenes is challenging because it is limited by the potential-well capacity and sampling rate of the imaging sensor; the core of the problem is balancing temporal resolution, spatial resolution, and dynamic range. To address this, we design a variable beam-splitting ratio multi-sensor system (BRVMS) that captures both long- and short-exposure frames and offers a variety of configurations to meet changing light conditions. In addition, we account for motion blur from long exposures before synthesizing the HDR frames: we propose a method that estimates the blur kernel using short-exposure frame constraints and adds a mask to remove outliers in overexposed areas. Finally, we propose a match-fusion method based on the two-layer 3D patch (2L3DP) to generate high-quality, detail-rich HDR frames. Extensive experiments and ablation studies demonstrate the effectiveness of the system. By combining the BRVMS with the 2L3DP match-fusion method, we enhance the adaptability and performance of the vision system in high-speed, high-dynamic-range scenes to meet the growing demands of vision applications.
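To illustrate how long- and short-exposure frames can be merged once matched, the sketch below back-projects each frame to relative radiance by its exposure time and blends them with a well-exposedness weight. The weighting function and exposure values are assumptions; the 2L3DP matching and deblurring stages are not shown.

```python
# Minimal sketch of merging long/short exposures into an HDR radiance map with a simple
# well-exposedness weight. Illustrative only; not the paper's fusion method.
import numpy as np

def well_exposedness(img, sigma=0.2):
    """Gaussian weight around mid-gray; near-saturated or near-black pixels get low weight."""
    return np.exp(-((img - 0.5) ** 2) / (2.0 * sigma ** 2))

def merge_hdr(frames, exposure_times):
    """frames: list of float images in [0, 1]; exposure_times: matching list of seconds."""
    num = np.zeros_like(frames[0])
    den = np.zeros_like(frames[0])
    for img, t in zip(frames, exposure_times):
        w = well_exposedness(img)
        num += w * (img / t)          # back-project to (relative) scene radiance
        den += w
    return num / np.maximum(den, 1e-6)

short = np.clip(np.random.rand(480, 640, 3), 0, 1)
long_ = np.clip(short * 8.0, 0, 1)    # crude stand-in for an 8x longer exposure of the same scene
hdr = merge_hdr([short, long_], [1 / 800, 1 / 100])
print(hdr.shape, hdr.dtype)
```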
Citations: 0
LPID-DAFT-YOLOv8: A lightweight high-precision contraband detection framework for X-ray security inspection
IF 3.0 | CAS Tier 3 (Engineering & Technology) | Q2 ENGINEERING, ELECTRICAL & ELECTRONIC | Pub Date: 2026-01-24 | DOI: 10.1016/j.dsp.2026.105957
Fanyi Kong, Dongming Liu, Dan Shan, Hui Cao
To address the challenge of detecting small, overlapping, and occluded contraband items in complex X-ray security imagery, this paper proposes LPID-DAFT-YOLOv8, a lightweight object detection framework designed to improve detection accuracy while maintaining real-time performance. First, a Deformable AIFI Encoder is introduced to replace the original SPPF module in YOLOv8, reducing computational overhead while enhancing semantic feature representation. Second, a Cross-Scale Fourier Convolution (CSFC) module is designed to improve multi-scale feature modeling; it integrates Multi-order Fractional Fourier Convolution (MFRFC) to jointly capture spatial structures and frequency-domain information. Third, an Inner-IoU loss function is adopted to adapt the bounding-box regression scale according to IoU values, improving localization accuracy and robustness. The proposed LPID-DAFT-YOLOv8 is evaluated under identical training conditions on a custom dual-energy X-ray dataset of 20,000 annotated pseudo-colored images, achieving a mean Average Precision (mAP50) of 96.7% at an inference speed of 172.8 FPS. Comparative experiments indicate that LPID-DAFT-YOLOv8 balances detection accuracy and inference efficiency, supporting real-time contraband detection in high-throughput security screening scenarios.
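For reference, the sketch below shows one common Inner-IoU-style formulation, where IoU is computed on auxiliary boxes rescaled by a ratio around each box centre; the ratio value and how the loss is scheduled in LPID-DAFT-YOLOv8's training are assumptions, not taken from the paper.

```python
# Minimal sketch of an Inner-IoU-style overlap: IoU is evaluated on ratio-scaled auxiliary
# boxes centred on the original boxes, changing how the regression loss behaves for small boxes.
def inner_iou(box_a, box_b, ratio=0.75):
    """Boxes as (x1, y1, x2, y2). Returns IoU of the ratio-scaled 'inner' boxes."""
    def inner(box):
        x1, y1, x2, y2 = box
        cx, cy, w, h = (x1 + x2) / 2, (y1 + y2) / 2, (x2 - x1) * ratio, (y2 - y1) * ratio
        return cx - w / 2, cy - h / 2, cx + w / 2, cy + h / 2

    ax1, ay1, ax2, ay2 = inner(box_a)
    bx1, by1, bx2, by2 = inner(box_b)
    iw = max(0.0, min(ax2, bx2) - max(ax1, bx1))
    ih = max(0.0, min(ay2, by2) - max(ay1, by1))
    inter = iw * ih
    union = (ax2 - ax1) * (ay2 - ay1) + (bx2 - bx1) * (by2 - by1) - inter
    return inter / max(union, 1e-9)

pred, gt = (10.0, 10.0, 30.0, 26.0), (12.0, 11.0, 32.0, 27.0)
print("inner-IoU loss:", 1.0 - inner_iou(pred, gt))
```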
Citations: 0
A communication signal recognition method based on multi-scale feature fusion
IF 3.0 | CAS Tier 3 (Engineering & Technology) | Q2 ENGINEERING, ELECTRICAL & ELECTRONIC | Pub Date: 2026-01-24 | DOI: 10.1016/j.dsp.2026.105950
Yaoyi He, An Gong, Yunlu Ge, Xiaolei Zhao, Ning Ding
Communication signal recognition is a critical technology for ensuring the security and intelligent management of wireless communication systems, with broad applications in spectrum monitoring, electronic warfare, unmanned communication, and cognitive radio. Traditional neural networks often struggle to extract signal features across different scales, leading to low recognition accuracy. This paper introduces a new model designed to solve this issue by fusing multi-scale features. The model uses a dual-branch architecture. One branch employs the Discrete Wavelet Transform (DWT) to capture features from both low and high signal frequencies. The second branch is a Bidirectional Long Short-Term Memory (BiLSTM) network that extracts temporal patterns. A gating mechanism, a bidirectional structure, and a global timestep attention mechanism all enhance the BiLSTM module’s performance. Finally, the system combines these distinct features to enable effective signal detection and recognition. Tests conducted with the Panoradio HF dataset confirm our model’s capabilities. Our proposed method attained an average recognition accuracy of 79.52%, which surpasses competing baseline models by 4.51%.
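As a rough sketch of the dual-branch idea, the example below pairs a one-level Haar DWT (low/high-frequency sub-band energies) with a BiLSTM over the raw sequence and concatenates the two feature sets for classification. The layer sizes, the single-level transform, and the 18-class head are simplifying assumptions, not the paper's architecture.

```python
# Minimal dual-branch sketch: Haar DWT sub-band features + bidirectional LSTM temporal
# features, concatenated for classification. Hypothetical sizes throughout.
import torch
import torch.nn as nn

def haar_dwt(x):
    """x: (B, T) with even T. Returns approximation and detail coefficients, each (B, T/2)."""
    even, odd = x[:, 0::2], x[:, 1::2]
    return (even + odd) / 2.0 ** 0.5, (even - odd) / 2.0 ** 0.5

class DualBranchClassifier(nn.Module):
    def __init__(self, hidden=64, n_classes=18):
        super().__init__()
        self.bilstm = nn.LSTM(input_size=1, hidden_size=hidden, batch_first=True, bidirectional=True)
        self.head = nn.Linear(2 * hidden + 2, n_classes)   # BiLSTM features + 2 sub-band energies

    def forward(self, x):                                   # x: (B, T)
        c_a, c_d = haar_dwt(x)
        band_energy = torch.stack([c_a.pow(2).mean(dim=1), c_d.pow(2).mean(dim=1)], dim=1)
        _, (h, _) = self.bilstm(x.unsqueeze(-1))            # h: (2, B, hidden)
        temporal = torch.cat([h[0], h[1]], dim=1)           # forward + backward final states
        return self.head(torch.cat([temporal, band_energy], dim=1))

model = DualBranchClassifier()
logits = model(torch.randn(4, 512))
print(logits.shape)  # torch.Size([4, 18])
```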
Citations: 0
Enhanced feature fusion and detail-preserving network for small object detection in medical microscopic images
IF 3.0 | CAS Tier 3 (Engineering & Technology) | Q2 ENGINEERING, ELECTRICAL & ELECTRONIC | Pub Date: 2026-01-23 | DOI: 10.1016/j.dsp.2026.105938
Runtian Zheng, Congpeng Zhang, Ying Liu
Accurately detecting tiny targets in microscopic images is critical for tuberculosis screening yet remains difficult due to large shape variation, dense instances with weak semantics, and cluttered backgrounds. We curate a Mycobacterium tuberculosis dataset of 5,842 microscopic images and present EFDNet, an Enhanced Feature Fusion and Detail-Preserving detector. EFDNet combines an Adaptive Feature Enhancement module that dynamically shifts convolutional sampling to capture irregular, fine-grained patterns, a Cross-Stage Enhanced Feature Pyramid Network that fuses semantic and localization cues across scales to withstand crowding and background clutter, and a lightweight shared Detail-Enhanced detection head that preserves high-frequency structure through differential convolutions and shared parameters, together with a Normalized Wasserstein Distance loss that reduces localization sensitivity for small boxes. On our dataset, the Tuberculosis-Phonecamera dataset, and the cross-domain BBBC041 blood-cell benchmark, EFDNet achieves AP50 of 81.9%, 87.6%, and 95.2%, outperforming a strong baseline by +5.7, +3.2, and +3.9 points, respectively, while maintaining low computational cost. These results indicate robust small-object detection under varied microscopy conditions and support the practical utility of EFDNet for automated screening.
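The Normalized Wasserstein Distance loss mentioned above follows a known construction for tiny objects: boxes are modelled as 2-D Gaussians and their closed-form 2-Wasserstein distance is exponentially normalized. The sketch below shows that computation; the constant C is dataset-dependent and the value used here is a placeholder.

```python
# Minimal sketch of a Normalized Wasserstein Distance between two boxes treated as
# 2-D Gaussians; 1 - NWD can serve as a small-box-friendly regression loss.
import math

def nwd(box_a, box_b, c=12.8):
    """Boxes as (cx, cy, w, h). Returns the normalized Wasserstein similarity in (0, 1]."""
    (cxa, cya, wa, ha), (cxb, cyb, wb, hb) = box_a, box_b
    # Squared 2-Wasserstein distance between N([cx, cy], diag(w/2, h/2)^2) Gaussians.
    w2_sq = (cxa - cxb) ** 2 + (cya - cyb) ** 2 + (wa / 2 - wb / 2) ** 2 + (ha / 2 - hb / 2) ** 2
    return math.exp(-math.sqrt(w2_sq) / c)

pred, gt = (50.0, 40.0, 6.0, 5.0), (52.0, 41.0, 5.0, 5.0)
print("NWD loss:", 1.0 - nwd(pred, gt))   # less sensitive to small offsets than IoU for tiny boxes
```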
Citations: 0