
Digital Signal Processing: Latest Articles

ALFusion: Adaptive fusion for infrared and visible images under complex lighting conditions
IF 2.9 | CAS Tier 3 (Engineering & Technology) | JCR Q2 ENGINEERING, ELECTRICAL & ELECTRONIC | Pub Date: 2024-11-17 | DOI: 10.1016/j.dsp.2024.104864
Hanlin Xu , Gang Liu , Yao Qian , Xiangbo Zhang , Durga Prasad Bavirisetti
In the task of infrared and visible image fusion, source images often exhibit complex and variable characteristics due to scene illumination. To address the challenges posed by complex lighting conditions and enhance the quality of fused images, we develop the ALFusion method, which combines dynamic convolution and Transformer for infrared and visible image fusion. The core idea is to utilize the adaptability of dynamic convolution coupled with the superior long-range modeling capabilities of the Transformer to design a hybrid feature extractor. This extractor dynamically and comprehensively captures features from source images under various lighting conditions. To enhance the effectiveness of mixed features, we integrate an elaborate multi-scale attention enhancement module into the skip connections of the U-Net architecture. In this module, we use convolutional kernels of various sizes to expand the receptive field and incorporate an attention mechanism to enhance and highlight the combined use of dynamic convolution and the Transformer. Considering the performance in advanced visual tasks, an illumination detection network is integrated into the loss function, strategically balancing the pixel fusion ratio and optimizing visual impacts under varied lighting conditions, leveraging information from different source images. The fusion results indicate that our method consistently delivers superior background contrast and enhanced texture and structural features across diverse lighting conditions. Ablation and complementary experiments have been conducted, further affirming the effectiveness of our proposed method and highlighting its potential in advanced visual tasks.
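The dynamic-convolution half of the hybrid extractor can be sketched as a soft mixture of basis kernels chosen per input. Below is a minimal NumPy sketch, assuming a scalar global-pool routing head and a single channel; the paper's actual extractor, shapes, and Transformer branch are not specified in the abstract.

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def dynamic_conv2d(x, kernels, w_att, b_att):
    """Aggregate K basis kernels with input-dependent attention, then convolve.

    x        : (H, W) single-channel input
    kernels  : (K, k, k) basis kernels
    w_att, b_att : parameters of a tiny routing head acting on the globally
                   pooled input (illustrative, not the paper's exact head)
    """
    # Routing: global average pool -> linear -> softmax over the K kernels.
    pooled = x.mean()
    att = softmax(w_att * pooled + b_att)          # (K,) mixture weights
    k_agg = np.tensordot(att, kernels, axes=1)     # (k, k) aggregated kernel
    # 'Valid' 2D correlation with the aggregated kernel.
    kh, kw = k_agg.shape
    H, W = x.shape
    out = np.empty((H - kh + 1, W - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(x[i:i+kh, j:j+kw] * k_agg)
    return out, att
```

Because the mixture weights depend on the input statistics, a dark infrared frame and a bright visible frame are filtered by different effective kernels, which is the adaptability the abstract appeals to.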
Citations: 0
Efficient multimodal object detection via coordinate attention fusion for adverse environmental conditions
IF 2.9 | CAS Tier 3 (Engineering & Technology) | JCR Q2 ENGINEERING, ELECTRICAL & ELECTRONIC | Pub Date: 2024-11-15 | DOI: 10.1016/j.dsp.2024.104873
Xiangjin Zeng , Genghuan Liu , Jianming Chen , Xiaoyan Wu , Jianglei Di , Zhenbo Ren , Yuwen Qin
Integrating complementary visual information from multimodal image pairs can significantly improve the robustness and accuracy of object detection algorithms, particularly in challenging environments. However, a key challenge lies in the effective fusion of modality-specific features within these algorithms. To address this, we propose a novel lightweight fusion module, termed the Coordinate Attention Fusion (CAF) module, built on the YOLOv5 object detection framework. The CAF module exploits differential amplification and coordinated attention mechanisms to selectively enhance distinctive cross-modal features, thereby preserving critical modality-specific information. To further optimize performance and reduce computational overhead, the two-stream backbone network has been refined, reducing the model's parameter count without compromising accuracy. Comprehensive experiments conducted on two benchmark multimodal datasets demonstrate that the proposed approach consistently surpasses conventional methods and outperforms existing state-of-the-art multimodal object detection algorithms. These findings underscore the potential of cross-modality fusion as a promising direction for improving object detection in adverse conditions.
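The coordinate-attention idea behind the CAF module factorizes spatial attention into one gate per axis, so positional information along the other axis survives the pooling. A minimal sketch, assuming per-channel scalar weights and omitting the shared reduction layer of the published Coordinate Attention block:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def coordinate_attention(x, w_h, w_w):
    """Simplified coordinate-attention gate on a (C, H, W) feature map.

    Pooling along each spatial axis keeps positional information in the
    other axis; w_h and w_w are illustrative per-channel weights.
    """
    C, H, W = x.shape
    pool_h = x.mean(axis=2)                  # (C, H): average over width
    pool_w = x.mean(axis=1)                  # (C, W): average over height
    gate_h = sigmoid(w_h[:, None] * pool_h)  # (C, H) height-wise gate
    gate_w = sigmoid(w_w[:, None] * pool_w)  # (C, W) width-wise gate
    # Each position (c, i, j) is scaled by gate_h[c, i] * gate_w[c, j].
    return x * gate_h[:, :, None] * gate_w[:, None, :]
```

In a fusion setting, applying such gates to the difference of the two modality streams is one way to "selectively enhance distinctive cross-modal features" as the abstract describes.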
Citations: 0
Multidimensional knowledge distillation for multimodal scene classification of remote sensing images
IF 2.9 | CAS Tier 3 (Engineering & Technology) | JCR Q2 ENGINEERING, ELECTRICAL & ELECTRONIC | Pub Date: 2024-11-15 | DOI: 10.1016/j.dsp.2024.104876
Xiaomin Fan , Wujie Zhou
The advancement of deep learning technology has significantly improved the performance of remote sensing image (RSI) scene classification. However, it is important to note that most RSI scene classification models heavily depend on complex structures, resulting in high computational requirements and substantial costs. This study addresses this issue by utilizing a state-of-the-art model compression technique known as knowledge distillation (KD). The objective of KD is to transfer extensive knowledge from an excellent teacher model to a lightweight student model. While existing models focus on guiding the student network to learn specific stage or scale features from the teacher network, they lack comprehensiveness. To enhance the model's feature representation capability in complex scenarios, this study proposes a multidimensional KD approach (MKD). MKD enables the student network (MKD-S) to learn the feature representation capability of the teacher network (MKD-T) at each stage through a hybrid KD method. Specifically, the encoder incorporates a local-global KD mechanism to capture both low-level local information and high-level global information based on feature differences. Moreover, the fusion stage introduces inter-layer relationship KD and intra-layer feature KD to account for the dependencies between intermediate features within the MKD-S and MKD-T models. Additionally, the discrete wavelet transform, known for its ability to capture frequency domain and time domain features, is applied in the decoding stage of the MKD-T. This integration of decoding features across layers enables the completion of the knowledge response in the MKD-S. Experimental results demonstrate the effectiveness of our MKD on two benchmark datasets: Vaihingen and Potsdam.
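The response-based form of KD that MKD builds on can be written as a temperature-softened KL divergence between teacher and student logits. A generic sketch in the Hinton style follows; the paper's stage-wise local-global, inter-layer, and wavelet-domain losses are not reproduced here.

```python
import numpy as np

def softmax(z, T=1.0):
    """Row-wise softmax with temperature T."""
    z = z / T
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def kd_loss(student_logits, teacher_logits, T=4.0):
    """Logit distillation: KL(teacher || student) on temperature-softened
    distributions, scaled by T^2 so gradients stay comparable across T."""
    p_t = softmax(teacher_logits, T)
    p_s = softmax(student_logits, T)
    kl = np.sum(p_t * (np.log(p_t + 1e-12) - np.log(p_s + 1e-12)), axis=-1)
    return (T ** 2) * kl.mean()
```

A higher temperature flattens the teacher distribution, exposing the "dark knowledge" in the relative scores of non-target classes to the student.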
Citations: 0
Synthetic Augmented L-shaped Array design and 2D Cramér-Rao bound analysis based on Vertical-Horizontal Moving Scheme
IF 2.9 | CAS Tier 3 (Engineering & Technology) | JCR Q2 ENGINEERING, ELECTRICAL & ELECTRONIC | Pub Date: 2024-11-15 | DOI: 10.1016/j.dsp.2024.104867
Danni Feng , Guiyu Wang , Xiangnan Li , Weijiang Wang , Shiwei Ren
This paper introduces an innovative approach to enhance the effectiveness of the L-shaped array using synthetic aperture processing. Through the implementation of the Vertical-Horizontal Moving Scheme (VHMS), a novel 2D array structure called the Synthetic Augmented L-shaped Array (SALA) is developed, featuring varied element spacing along the x and y axes of a 1D linear array. Comparative analyses demonstrate that the SALA achieves a larger difference coarray and higher degrees of freedom compared to arrays with an equivalent number of physical elements. The paper also provides a comprehensive derivation of the Cramér-Rao Bound (CRB) for 2D moving arrays, with detailed analyses of its conditions and properties. Simulation results highlight the SALA's superior capabilities in detecting multiple sources and providing precise 2D direction of arrival estimation, along with a lower CRB, indicating its potential for practical implementation in diverse scenarios.
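The degrees-of-freedom argument rests on the difference coarray: the set of pairwise sensor-position differences. A small sketch for integer positions shows how a sparser layout of the same four elements yields more unique lags; the positions below are illustrative, not the SALA geometry.

```python
import numpy as np

def difference_coarray(positions):
    """Unique lags of the difference coarray for integer sensor positions.
    The usable degrees of freedom for DOA estimation scale with the number
    of distinct lags, which is why synthetic/moved apertures help."""
    p = np.asarray(positions)
    lags = (p[:, None] - p[None, :]).ravel()
    return np.unique(lags)

# A 4-element ULA at [0, 1, 2, 3] gives lags -3..3 (7 in total), while a
# minimum-redundancy-style layout of the same 4 elements gives 13.
ula = difference_coarray([0, 1, 2, 3])
sparse = difference_coarray([0, 1, 4, 6])
```

Moving the physical array (as in the VHMS) effectively adds new positions to `positions` over time, enlarging the synthetic coarray beyond what the physical elements alone provide.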
Citations: 0
Adaptive polarimetric persymmetric detection for distributed subspace targets in lognormal texture clutter
IF 2.9 | CAS Tier 3 (Engineering & Technology) | JCR Q2 ENGINEERING, ELECTRICAL & ELECTRONIC | Pub Date: 2024-11-15 | DOI: 10.1016/j.dsp.2024.104872
Lichao Liu , Qiang Guo , Yuhang Tian , Mykola Kaliuzhnyi , Vladimir Tuz
In this paper, the adaptive polarimetric persymmetric detection for distributed subspace targets under the background of compound Gaussian clutter is investigated, where the compound Gaussian clutter exhibits texture that follows a lognormal distribution. Based on the two-step Generalized Likelihood Ratio Test (2S GLRT), two-step maximum a posteriori Generalized Likelihood Ratio Test (2S MAP GLRT), two-step Rao (2S Rao) test and two-step Wald (2S Wald) test, we have proposed four polarimetric persymmetric detectors. Initially, we model the target echo as a distributed subspace signal, assuming known clutter texture and polarization speckle covariance matrix (PSCM), and derive the corresponding test statistics. Then, the estimation of the lognormal texture is obtained through maximum a posteriori (MAP). Conventionally, a set of secondary data, which share the same PSCM as the cells under test (CUTs), is assumed to participate in the estimation of the PSCM, leveraging its inherent persymmetric property during the estimation process. Finally, the estimated values are substituted into the proposed test statistics to obtain fully adaptive polarimetric persymmetric detectors. Numerical experimental results using simulated data and measured sea clutter data demonstrate that the proposed four adaptive polarimetric persymmetric detectors exhibit a constant false alarm rate (CFAR) characteristic relative to the PSCM and satisfactory detection performance for distributed subspace targets.
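For intuition, a one-step adaptive matched filter illustrates the shape of such two-step test statistics: estimate the clutter covariance from secondary data, whiten, then correlate with the target signature. This is a deliberately simplified stand-in; it includes neither the persymmetric structure nor the lognormal-texture MAP estimation of the proposed detectors.

```python
import numpy as np

def amf_statistic(z, p, secondary):
    """Adaptive matched filter statistic |p^H R^-1 z|^2 / (p^H R^-1 p),
    with R the sample covariance of the secondary (target-free) data.

    z         : (N,) complex test snapshot (the cell under test)
    p         : (N,) known target signature
    secondary : (N, K) target-free snapshots sharing the clutter covariance
    """
    R = (secondary @ secondary.conj().T) / secondary.shape[1]
    Rinv = np.linalg.inv(R)
    num = np.abs(p.conj() @ Rinv @ z) ** 2
    den = np.real(p.conj() @ Rinv @ p)
    return num / den
```

The full 2S GLRT/Rao/Wald detectors in the paper follow the same two-step pattern but replace the rank-one signature with a subspace model, exploit the persymmetric covariance structure, and estimate the lognormal texture per cell.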
Citations: 0
MFFR-net: Multi-scale feature fusion and attentive recalibration network for deep neural speech enhancement
IF 2.9 | CAS Tier 3 (Engineering & Technology) | JCR Q2 ENGINEERING, ELECTRICAL & ELECTRONIC | Pub Date: 2024-11-14 | DOI: 10.1016/j.dsp.2024.104870
Nasir Saleem , Sami Bourouis
Deep neural networks (DNNs) have been successfully applied in advancing speech enhancement (SE), particularly in overcoming the challenges posed by nonstationary noisy backgrounds. In this context, multi-scale feature fusion and recalibration (MFFR) can improve speech enhancement performance by combining multi-scale and recalibrated features. This paper proposes a speech enhancement system that capitalizes on a large-scale pre-trained model, seamlessly fused with features attentively recalibrated using varying kernel sizes in convolutional layers. This process enables the SE system to capture features across diverse scales, enhancing its overall performance. The proposed SE system uses a transferable features extractor architecture and integrates with multi-scaled attentively recalibrated features. Utilizing 2D-convolutional layers, the convolutional encoder-decoder extracts both local and contextual features from speech signals. To capture long-term temporal dependencies, a bidirectional simple recurrent unit (BSRU) serves as a bottleneck layer positioned between the encoder and decoder. The experiments are conducted on three publicly available datasets including Texas Instruments/Massachusetts Institute of Technology (TIMIT), LibriSpeech, and Voice Cloning Toolkit+Diverse Environments Multi-channel Acoustic Noise Database (VCTK+DEMAND). The experimental results show that the proposed SE system performs better than several recent approaches on the Short-Time Objective Intelligibility (STOI) and Perceptual Evaluation of Speech Quality (PESQ) evaluation metrics. On the TIMIT dataset, the proposed system showcases a considerable improvement in STOI (17.3%) and PESQ (0.74) over the noisy mixture. The evaluation on the LibriSpeech dataset yields results with a 17.6% and 0.87 improvement in STOI and PESQ.
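The multi-scale fusion and recalibration idea can be sketched with parallel branches of different kernel sizes whose outputs are reweighted by a softmax over per-branch energies. The gating rule and kernel sizes below are assumptions for illustration, not MFFR-net's actual layers.

```python
import numpy as np

def conv1d_same(x, k):
    """'Same'-padded 1D correlation for an odd-length kernel k."""
    pad = len(k) // 2
    xp = np.pad(x, pad)
    return np.array([np.dot(xp[i:i + len(k)], k) for i in range(len(x))])

def mffr_block(x, kernels):
    """Multi-scale fusion with attentive recalibration, sketched:
    branches with different kernel sizes widen the receptive field, then
    a softmax 'attention' over per-branch energies reweights them."""
    branches = np.stack([conv1d_same(x, k) for k in kernels])  # (B, T)
    energy = (branches ** 2).mean(axis=1)                      # (B,)
    att = np.exp(energy - energy.max())
    att /= att.sum()                                           # recalibration weights
    return (att[:, None] * branches).sum(axis=0), att
```

Each branch sees the signal at a different temporal scale; the recalibration step lets the network emphasize whichever scale carries the most speech energy for the current input.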
Citations: 0
Adaptive weights-based relaxed broad learning system for imbalanced classification
IF 2.9 | CAS Tier 3 (Engineering & Technology) | JCR Q2 ENGINEERING, ELECTRICAL & ELECTRONIC | Pub Date: 2024-11-14 | DOI: 10.1016/j.dsp.2024.104869
Yanting Li , Yiping Gao , Junwei Jin , Jiaofen Nan , Yinghui Meng , Mengjie Wang , C.L. Philip Chen
As a shallow neural network, broad learning system (BLS) has gained significant attention in both academia and industry due to its efficiency and effectiveness. However, BLS and its variants are suboptimal when confronted with imbalanced data scenarios. Firstly, strict binary labeling strategy hinders effective disparities between different classes. Secondly, they generally do not distinguish between the contributions of minority and majority classes, resulting in classification outcomes biased toward the majority classes. To address these deficiencies, we propose an adaptive weights-based relaxed broad learning system for handling imbalanced classification tasks. We provide a label relaxation technique to construct a novel label matrix that not only widens the margins between classes but also maintains label consistency within each class. Additionally, an adaptive weighting strategy assigns higher weights to minority samples based on density information within and between classes. This enables the model to learn a more discriminative transformation matrix for imbalanced classification. The alternating direction method of multipliers algorithm is employed to solve the resulting model. Experimental results on numerous public imbalanced data sets demonstrate the effectiveness and efficiency of the proposed method.
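The weighted regression core of such a model admits a closed form: with a diagonal weight matrix D that up-weights minority samples, the output weights solve W = (A^T D A + lam I)^{-1} A^T D Y. Below is a sketch with a generic D; the paper's density-based weights, relaxed label matrix, and ADMM solver are not reproduced.

```python
import numpy as np

def weighted_output_weights(A, Y, sample_w, lam=1e-2):
    """Closed-form BLS-style output weights under per-sample weights.

    A        : (n, d) feature-node/enhancement-node matrix
    Y        : (n, c) label matrix
    sample_w : (n,) nonnegative per-sample weights (minority classes higher)
    lam      : ridge regularization strength
    """
    D = np.diag(sample_w)
    M = A.T @ D @ A + lam * np.eye(A.shape[1])
    return np.linalg.solve(M, A.T @ D @ Y)
```

Doubling a sample's weight has exactly the effect of duplicating that sample in the training set, which is why density-based up-weighting counteracts the majority-class bias.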
Citations: 0
MoCoDiff: Momentum context diffusion model for low-dose CT denoising MoCoDiff:用于低剂量 CT 去噪的动量背景扩散模型
IF 2.9 3区 工程技术 Q2 ENGINEERING, ELECTRICAL & ELECTRONIC Pub Date : 2024-11-13 DOI: 10.1016/j.dsp.2024.104868
Shaoting Zhao , Ailian Jiang , Jianguo Ding
Low-Dose Computed Tomography (LDCT) has gradually replaced Normal-Dose Computed Tomography (NDCT) due to its lower radiation exposure. However, the reduction in radiation dose has led to increased noise and artifacts in LDCT images. To date, many methods for LDCT denoising have emerged, but they often struggle to balance denoising performance with reconstruction efficiency. This paper presents a novel Momentum Context Diffusion model for low-dose CT denoising, termed MoCoDiff. First, MoCoDiff employs a Mean-Preserving Stochastic Degradation (MPSD) operator to gradually degrade NDCT to LDCT, effectively simulating the physical process of CT degradation and greatly reducing sampling steps. Furthermore, the stochastic nature of the MPSD operator enhances the diversity of samples in the training space and calibrates the deviation between network inputs and time-step embedded features. Second, we propose a Momentum Context (MoCo) strategy. This strategy uses the most recent sampling result from each step to update the context information, thereby narrowing the noise level gap between the sampling results and the context data. This approach helps to better guide the next sampling step. Finally, to prevent issues such as over-smoothing of image edges that can arise from using the mean square error loss function, we develop a dual-domain loss function that operates in both the image and wavelet domains. This approach leverages wavelet domain information to encourage the model to preserve structural details in the images more effectively. Extensive experimental results show that our MoCoDiff model outperforms competing methods in both denoising and generalization performance, while also ensuring fast training and inference.
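The dual-domain loss described above can be sketched in a few lines: image-domain MSE plus a penalty on the high-frequency subbands of a wavelet transform, which is what discourages over-smoothed edges. This is a minimal numpy illustration under assumptions not stated in the abstract — a one-level Haar transform, an L1 subband penalty, and a weighting scalar `alpha` are all placeholders for whatever the paper actually uses.

```python
import numpy as np

def haar_dwt2(x):
    """One-level 2-D Haar transform of an even-sized image.
    Returns the (LL, LH, HL, HH) subbands."""
    a = (x[0::2, :] + x[1::2, :]) / 2.0    # row-pair averages (low-pass)
    d = (x[0::2, :] - x[1::2, :]) / 2.0    # row-pair differences (high-pass)
    LL = (a[:, 0::2] + a[:, 1::2]) / 2.0
    LH = (a[:, 0::2] - a[:, 1::2]) / 2.0
    HL = (d[:, 0::2] + d[:, 1::2]) / 2.0
    HH = (d[:, 0::2] - d[:, 1::2]) / 2.0
    return LL, LH, HL, HH

def dual_domain_loss(pred, target, alpha=0.5):
    """Image-domain MSE plus L1 on the high-frequency Haar subbands;
    the wavelet term penalises edge smoothing that MSE alone tolerates."""
    mse = np.mean((pred - target) ** 2)
    wav = 0.0
    for P, T in zip(haar_dwt2(pred)[1:], haar_dwt2(target)[1:]):
        wav += np.mean(np.abs(P - T))
    return mse + alpha * wav
```

An over-smoothed prediction (e.g. a constant image) scores strictly worse than the target under both terms, whereas a pure-MSE loss barely distinguishes small edge blur.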
Citations: 0
Efficient recurrent real video restoration 高效的循环真实视频修复
IF 2.9 3区 工程技术 Q2 ENGINEERING, ELECTRICAL & ELECTRONIC Pub Date : 2024-11-13 DOI: 10.1016/j.dsp.2024.104851
Antoni Buades, Jose-Luis Lisani
We propose a novel method that addresses the most common limitations of real video sequences, including noise, blur, flicker, and low contrast. This method leverages the Discrete Cosine Transform (DCT) extensively for both deblurring and denoising tasks, ensuring computational efficiency. It also incorporates classical strategies for tonal stabilization and low-light enhancement. To the best of our knowledge, this is the first unified framework that tackles all these problems simultaneously. Compared to state-of-the-art learning-based methods for denoising and deblurring, our approach achieves better results while offering additional benefits such as full interpretability, reduced memory usage, and lighter computational requirements, making it well-suited for integration into mobile device processing chains.
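The abstract credits the DCT for both denoising and computational efficiency. A classical way this is done — sketched below, not the authors' exact pipeline — is hard-thresholding of patchwise DCT coefficients: transform each patch, zero coefficients that fall below a multiple of the noise level, and invert. The 8×8 non-overlapping patches and 3σ threshold are assumed parameters.

```python
import numpy as np
from scipy.fft import dctn, idctn

def dct_denoise(img, sigma, patch=8, thr_mult=3.0):
    """Denoise a 2-D image by hard-thresholding the DCT coefficients of
    non-overlapping patches: AC coefficients with magnitude below
    thr_mult * sigma are treated as noise and zeroed."""
    out = img.astype(float).copy()
    H, W = img.shape
    for i in range(0, H - patch + 1, patch):
        for j in range(0, W - patch + 1, patch):
            block = img[i:i + patch, j:j + patch].astype(float)
            c = dctn(block, norm='ortho')
            dc = c[0, 0]                          # preserve the patch mean
            c[np.abs(c) < thr_mult * sigma] = 0.0
            c[0, 0] = dc
            out[i:i + patch, j:j + patch] = idctn(c, norm='ortho')
    return out
```

On smooth regions most AC coefficients carry only noise, so thresholding removes nearly all of it while the orthonormal transform keeps the operation cheap and fully interpretable — the properties the abstract emphasises.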
Citations: 0
PV-YOLO: A lightweight pedestrian and vehicle detection model based on improved YOLOv8 PV-YOLO:基于改进型 YOLOv8 的轻量级行人和车辆检测模型
IF 2.9 3区 工程技术 Q2 ENGINEERING, ELECTRICAL & ELECTRONIC Pub Date : 2024-11-13 DOI: 10.1016/j.dsp.2024.104857
Yuhang Liu , Zhenghua Huang , Qiong Song , Kun Bai
With the frequent occurrence of urban traffic accidents, fast and accurate detection of pedestrian and vehicle targets has become one of the key technologies for intelligent assisted driving systems. To meet the efficiency and lightweight requirements of smart devices, this paper proposes a lightweight pedestrian and vehicle detection model based on the YOLOv8n model, named PV-YOLO. In the proposed model, receptive-field attention convolution (RFAConv) serves as the backbone network because of its target feature extraction ability, and the neck utilizes the bidirectional feature pyramid network (BiFPN) instead of the original path aggregation network (PANet) to simplify the feature fusion process. Moreover, a lightweight detection head is introduced to reduce the computational burden and improve the overall detection accuracy. In addition, a small target detection layer is designed to improve the accuracy for small distant targets. Finally, to reduce the computational burden further, the lightweight C2f module is utilized to compress the model. The experimental results on the BDD100K and KITTI datasets demonstrate that the proposed PV-YOLO can achieve higher detection accuracy than YOLOv8n and other baseline methods with less model complexity.
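The neck described above swaps PANet for BiFPN, whose characteristic operation is fast normalized fusion: each input feature map gets a learnable non-negative scalar, and the scalars are normalized by their sum rather than a softmax, which is cheaper. A minimal numpy sketch (with the scalar weights passed in rather than learned, and same-resolution inputs assumed):

```python
import numpy as np

def fast_normalized_fusion(features, weights, eps=1e-4):
    """BiFPN-style fusion: clip learnable scalars to be non-negative,
    normalise them to (approximately) sum to 1, and take the weighted
    sum of the input feature maps."""
    w = np.maximum(weights, 0.0)       # ReLU keeps the weights >= 0
    w = w / (w.sum() + eps)            # softmax-free normalisation
    return sum(wi * f for wi, f in zip(w, features))

# Fuse two same-shape feature maps with equal weights.
a = np.ones((2, 3))
b = np.zeros((2, 3))
fused = fast_normalized_fusion([a, b], np.array([1.0, 1.0]))
```

With equal weights the result is (up to `eps`) the plain average; a zero weight removes that branch entirely, which is how the network learns which resolution contributes most at each fusion node.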
Citations: 0