
Latest Publications in Signal Processing-Image Communication

Multi-Frame Adaptive Image Enhancement Algorithm for vehicle-mounted dynamic scenes
IF 2.7 | CAS Category 3 (Engineering & Technology) | Q2 ENGINEERING, ELECTRICAL & ELECTRONIC | Pub Date: 2025-12-26 | DOI: 10.1016/j.image.2025.117458
Jing Li , Tao Chen , Xiangyu Han , Xilin Luan , Jintao Li
To address the issue of image blurring caused by high-speed vehicle motion and complex road conditions in autonomous driving scenarios, this paper proposes a lightweight Multi-frame Adaptive Image Enhancement Network (MAIE-Net). The network innovatively introduces a hybrid motion compensation mechanism that integrates optical flow alignment and deformable convolution, effectively solving the non-rigid motion alignment problem in complex dynamic scenes. Additionally, a temporal feature enhancement module is constructed, leveraging 3D convolution and attention mechanisms to achieve adaptive fusion of multi-frame information. In terms of architecture design, an edge-guided U-Net structure is employed for multi-scale feature extraction and reconstruction. The framework incorporates edge feature extraction and attention mechanisms within the encoder–decoder to balance feature representation and computational efficiency. The overall lightweight design enables the model to adapt to in-vehicle computing platforms. Experimental results demonstrate that the proposed method significantly improves image quality while maintaining efficient real-time processing capabilities, effectively enhancing the environmental perception performance of in-vehicle vision systems across various driving scenarios, thereby providing a reliable visual enhancement solution for autonomous driving.
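The hybrid alignment idea (coarse optical-flow warping refined by deformable convolution) can be sketched as follows. This is a minimal illustration assuming PyTorch and torchvision; the names `flow_warp` and `HybridAlign`, the channel width, and the offset-group count are hypothetical, not taken from the paper.

```python
# Sketch only: coarse flow-based warping followed by deformable refinement.
# Channel sizes and module names are assumptions, not the paper's design.
import torch
import torch.nn as nn
import torch.nn.functional as F
from torchvision.ops import DeformConv2d

def flow_warp(feat, flow):
    """Warp neighbor-frame features toward the reference frame using optical flow.
    feat: (B, C, H, W); flow: (B, 2, H, W) in pixel units."""
    b, c, h, w = feat.shape
    ys, xs = torch.meshgrid(torch.arange(h), torch.arange(w), indexing="ij")
    grid = torch.stack((xs, ys), dim=-1).float().to(feat.device)   # (H, W, 2)
    grid = grid.unsqueeze(0) + flow.permute(0, 2, 3, 1)            # shift by flow
    gx = 2.0 * grid[..., 0] / max(w - 1, 1) - 1.0                  # normalize to [-1, 1]
    gy = 2.0 * grid[..., 1] / max(h - 1, 1) - 1.0
    return F.grid_sample(feat, torch.stack((gx, gy), dim=-1), align_corners=True)

class HybridAlign(nn.Module):
    """Coarse alignment by flow warping, fine non-rigid refinement by deformable conv."""
    def __init__(self, channels=64, deform_groups=8):
        super().__init__()
        # 2 offsets per sampling point, 9 points for a 3x3 kernel, per offset group.
        self.offset_conv = nn.Conv2d(channels * 2, 2 * deform_groups * 9, 3, padding=1)
        self.deform = DeformConv2d(channels, channels, 3, padding=1)

    def forward(self, ref_feat, nbr_feat, flow):
        coarse = flow_warp(nbr_feat, flow)                         # rigid, flow-guided step
        offsets = self.offset_conv(torch.cat([ref_feat, coarse], dim=1))
        return self.deform(coarse, offsets)                        # non-rigid correction
```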
{"title":"Multi-Frame Adaptive Image Enhancement Algorithm for vehicle-mounted dynamic scenes","authors":"Jing Li ,&nbsp;Tao Chen ,&nbsp;Xiangyu Han ,&nbsp;Xilin Luan ,&nbsp;Jintao Li","doi":"10.1016/j.image.2025.117458","DOIUrl":"10.1016/j.image.2025.117458","url":null,"abstract":"<div><div>To address the issue of image blurring caused by high-speed vehicle motion and complex road conditions in autonomous driving scenarios, this paper proposes a lightweight Multi-frame Adaptive Image Enhancement Network (MAIE-Net). The network innovatively introduces a hybrid motion compensation mechanism that integrates optical flow alignment and deformable convolution, effectively solving the non-rigid motion alignment problem in complex dynamic scenes. Additionally, a temporal feature enhancement module is constructed, leveraging 3D convolution and attention mechanisms to achieve adaptive fusion of multi-frame information. In terms of architecture design, an edge-guided U-Net structure is employed for multi-scale feature extraction and reconstruction. The framework incorporates edge feature extraction and attention mechanisms within the encoder–decoder to balance feature representation and computational efficiency. The overall lightweight design enables the model to adapt to in-vehicle computing platforms. Experimental results demonstrate that the proposed method significantly improves image quality while maintaining efficient real-time processing capabilities, effectively enhancing the environmental perception performance of in-vehicle vision systems across various driving scenarios, thereby providing a reliable visual enhancement solution for autonomous driving.</div></div>","PeriodicalId":49521,"journal":{"name":"Signal Processing-Image Communication","volume":"142 ","pages":"Article 117458"},"PeriodicalIF":2.7,"publicationDate":"2025-12-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145977991","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Low-light image enhancement via boundary constraints and non-local similarity
IF 2.7 | CAS Category 3 (Engineering & Technology) | Q2 ENGINEERING, ELECTRICAL & ELECTRONIC | Pub Date: 2025-12-25 | DOI: 10.1016/j.image.2025.117459
Wan Li , Hengji Xie , Bin Yao , Xiaolin Zhang , Rongrong Fei
Low light often leads to poor image visibility, which can easily degrade the performance of computer vision algorithms. Traditional enhancement methods focus excessively on illumination map restoration while neglecting the non-local similarity present in natural images. In this paper, we propose an effective low-light image enhancement method based on boundary constraints and non-local similarity. First, a fast and effective boundary-constraint method is proposed to estimate illumination maps. Then, a combined optimization model with low-rank and context constraints is presented to improve the enhancement results: the low-rank constraints capture the non-local similarity of the reflectance image, and the context constraints improve the accuracy of the illumination map. Finally, alternating iterative optimization is employed to solve the non-independent constraints between the illumination and reflectance maps. Experimental results demonstrate that the proposed algorithm enhances images efficiently in terms of both objective and subjective quality.
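Under the Retinex model S = R * L with 0 <= R <= 1, the illumination is bounded below by the per-pixel maximum over color channels, which is the kind of boundary constraint such an estimate can start from. The sketch below illustrates only this initialization step; the smoothing window and gamma are illustrative choices, not the paper's optimization.

```python
# Minimal sketch of a boundary-constrained illumination estimate.
# The uniform smoothing and gamma value are assumptions for illustration.
import numpy as np
from scipy.ndimage import uniform_filter

def estimate_illumination(img, win=15, gamma=0.8):
    """img: float RGB array in [0, 1]; returns (illumination map, enhanced image)."""
    lower_bound = img.max(axis=2)                   # boundary constraint: L >= max_c S_c
    illum = uniform_filter(lower_bound, size=win)   # crude spatial smoothing
    illum = np.maximum(illum, lower_bound)          # re-impose the constraint after smoothing
    illum = np.clip(illum, 1e-3, 1.0) ** gamma      # gamma-adjust the illumination
    return illum, np.clip(img / illum[..., None], 0.0, 1.0)
```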
{"title":"Low-light image enhancement via boundary constraints and non-local similarity","authors":"Wan Li ,&nbsp;Hengji Xie ,&nbsp;Bin Yao ,&nbsp;Xiaolin Zhang ,&nbsp;Rongrong Fei","doi":"10.1016/j.image.2025.117459","DOIUrl":"10.1016/j.image.2025.117459","url":null,"abstract":"<div><div>Low light often leads to poor image visibility, which can easily affect the performance of computer vision algorithms. The traditional enhancement methods focus excessively on illumination map restoration while neglecting the non-local similarity in natural images. In this paper, we propose an effective low-light image enhancement method based on boundary constraints and non-local similarity. First, a fast and effective boundary constraints method is proposed to estimate illumination maps. Then, a combined optimization model with low-rank and context constraints was presented to improve the enhancing results. Among them, low-rank constraints are used to capture the non-local similarity of the reflectance image, and context constraints are used to improve the accuracy of the illumination map. Finally, alternating iterative optimization is employed for solving non-independent constraints between the illumination and reflectance maps. Experimental results demonstrate that the proposed algorithm enhances images efficiently in terms of both objective quality and subjective quality.</div></div>","PeriodicalId":49521,"journal":{"name":"Signal Processing-Image Communication","volume":"142 ","pages":"Article 117459"},"PeriodicalIF":2.7,"publicationDate":"2025-12-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145885406","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
NFlowAD: A normalizing flow model for anomaly detection in human motion animations
IF 2.7 | CAS Category 3 (Engineering & Technology) | Q2 ENGINEERING, ELECTRICAL & ELECTRONIC | Pub Date: 2025-12-24 | DOI: 10.1016/j.image.2025.117469
Mahamat Issa Choueb , Praveen Kumar Sekharamantry , Giulia Martinelli , Francesco De Natale , Nicola Conci
Anomaly detection has been extensively investigated in numerous application areas. Hand-crafted rules have gradually given way to supervised classification techniques, which frequently rely on a small number of anomaly labels and related architectures. When it comes to human motion, abnormalities emerge at a fine-grained temporal or joint level rather than over a whole video sequence.
This study introduces NFlowAD, a self-supervised system that analyzes body joints to detect irregularities in human motion. It blends normalizing flows with masked motion modeling to describe normal motion data without the need for anomaly labels. Inference uses both reconstruction errors and flow-based likelihoods to detect anomalies. The validation pipeline on several state-of-the-art datasets demonstrates NFlowAD's efficiency in recognizing, locating, and analyzing anomalous motion sequences, while maintaining robust detection and interpretability.
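The scoring idea, fusing a masked-reconstruction error with a flow-based likelihood, can be written compactly. This is a sketch under assumptions: the weight `alpha` and the per-sequence reduction are hypothetical, not NFlowAD's published rule.

```python
# Illustrative combined anomaly score: reconstruction error + flow NLL.
import torch

def anomaly_score(recon, target, flow_nll, alpha=0.5):
    """recon/target: (B, T, J, 3) joint positions;
    flow_nll: (B,) negative log-likelihood from the normalizing flow."""
    recon_err = ((recon - target) ** 2).mean(dim=(1, 2, 3))   # per-sequence error
    return alpha * recon_err + (1.0 - alpha) * flow_nll       # higher = more anomalous
```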
{"title":"NFlowAD: A normalizing flow model for anomaly detection in human motion animations","authors":"Mahamat Issa Choueb ,&nbsp;Praveen Kumar Sekharamantry ,&nbsp;Giulia Martinelli ,&nbsp;Francesco De Natale ,&nbsp;Nicola Conci","doi":"10.1016/j.image.2025.117469","DOIUrl":"10.1016/j.image.2025.117469","url":null,"abstract":"<div><div>Anomaly detection has been extensively investigated in numerous application areas. Hand-crafted rules have gradually given way to supervised classification techniques, which frequently rely on a small number of anomaly labels and related architectures. When it comes to human motion, abnormalities emerge at a fine-grained temporal or joint level rather than over a whole video sequence.</div><div>This study introduces NFlowAD, a self-supervised system that analyzes body joints to detect irregularities in human motion. It blends normalizing flows with masked motion modeling to describe normal motion data without the need for anomaly labels. Inference uses both reconstruction mistakes and flow-based likelihoods to detect anomalies. The validation pipeline on various state-of-the-art datasets demonstrates NFlowAD’s efficiency in recognizing, locating, and analyzing anomalous motion sequences, while maintaining robust detection and interpretability.</div></div>","PeriodicalId":49521,"journal":{"name":"Signal Processing-Image Communication","volume":"142 ","pages":"Article 117469"},"PeriodicalIF":2.7,"publicationDate":"2025-12-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145842321","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
SACIFuse: Adaptive enhancement of salient features and cross-modal attention interaction for infrared and visible image fusion
IF 2.7 | CAS Category 3 (Engineering & Technology) | Q2 ENGINEERING, ELECTRICAL & ELECTRONIC | Pub Date: 2025-12-24 | DOI: 10.1016/j.image.2025.117467
Hao Zhai, Anyu Li, Yan Wei, Huashan Tan, Yiyang Ru
The goal of infrared and visible image fusion is to create images that highlight infrared thermal targets while preserving texture information under challenging lighting conditions. However, in extreme environments such as heavy fog or overexposure, visible images often contain redundant information, negatively affecting fusion results. To better emphasize salient targets in infrared images and reduce interference from redundant information, this paper proposes an adaptive salient enhancement fusion method for infrared and visible images, called SACIFuse. First, we design a Salient Feature Prediction Enhancement Module (SFEM), which extracts image gradients through edge operators and generates a mask quantifying the probability of redundant information. This mask adaptively weights the source image, suppressing redundant visible-light information while enhancing infrared targets. Additionally, we introduce a Salient Feature Interaction Attention Module (SFIM), which employs residual attention combined with spatial and channel attention mechanisms to guide the interaction between the enhanced salient features and the source image features, ensuring that the fusion results highlight infrared targets while preserving visible-light texture. Finally, our proposed loss function constructs a binary mask of the fused image to impose constraints on salient targets, effectively preventing the adverse effects of redundant information on key regions. Extensive testing on public datasets shows that SACIFuse outperforms existing state-of-the-art methods in both qualitative and quantitative evaluations. Moreover, generalization experiments on other datasets demonstrate that the proposed model exhibits strong generalization capabilities.
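A minimal sketch of the SFEM idea: Sobel gradients yield a soft mask that down-weights low-gradient (likely redundant) visible-light regions while keeping infrared content. The kernels below are the standard Sobel operators; the sigmoid mask and the fusion rule are simplifications, not the paper's exact module.

```python
# Sketch only: gradient-driven redundancy mask and weighted fusion.
import torch
import torch.nn.functional as F

SOBEL_X = torch.tensor([[-1., 0., 1.], [-2., 0., 2.], [-1., 0., 1.]]).view(1, 1, 3, 3)
SOBEL_Y = SOBEL_X.transpose(2, 3)

def redundancy_mask(gray, temperature=10.0):
    """gray: (B, 1, H, W) in [0, 1]; mask is near 0 where gradients are weak."""
    gx = F.conv2d(gray, SOBEL_X, padding=1)
    gy = F.conv2d(gray, SOBEL_Y, padding=1)
    grad = torch.sqrt(gx ** 2 + gy ** 2 + 1e-6)                # gradient magnitude
    return torch.sigmoid(temperature * (grad - grad.mean()))  # soft salience mask

def weighted_fuse(ir, vis):
    m = redundancy_mask(vis)
    return m * vis + (1.0 - m) * ir   # suppress redundant visible regions, keep IR targets
```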
{"title":"SACIFuse: Adaptive enhancement of salient features and cross-modal attention interaction for infrared and visible image fusion","authors":"Hao Zhai,&nbsp;Anyu Li,&nbsp;Yan Wei,&nbsp;Huashan Tan,&nbsp;Yiyang Ru","doi":"10.1016/j.image.2025.117467","DOIUrl":"10.1016/j.image.2025.117467","url":null,"abstract":"<div><div>The goal of infrared and visible light image fusion is to create images that highlight infrared thermal targets while preserving texture information under challenging lighting conditions. However, in extreme environments like heavy fog or overexposure, visible light images often contain redundant information, negatively affecting fusion results. To better emphasize salient targets in infrared images and reduce interference from redundant information, this paper proposes an adaptive salient enhancement fusion method for infrared and visible light images, called SACIFuse. First, we designed a Salient Feature Prediction Enhancement Module (SFEM), which extracts image gradients through edge operators and generates a mask quantifying the probability of redundant information. This mask is used to adaptively weight the source image, thereby suppressing redundant visible light information while enhancing infrared targets. Additionally, we introduced a Salient Feature Interaction Attention Module (SFIM), capable of employing residual attention combined with spatial and channel attention mechanisms to guide the interaction between the enhanced salient features and the source image features, ensuring that the fusion results highlight infrared targets while preserving visible light texture. Finally, our proposed loss function constructs a binary mask of the fused image to impose constraints on salient targets, effectively preventing adverse effects of redundant information on key regions. Extensive testing on public datasets shows that SACIfuse outperforms existing state-of-the-art methods in both qualitative and quantitative evaluations. Moreover, generalization experiments conducted on other datasets demonstrate that the proposed model exhibits strong generalization capabilities.</div></div>","PeriodicalId":49521,"journal":{"name":"Signal Processing-Image Communication","volume":"142 ","pages":"Article 117467"},"PeriodicalIF":2.7,"publicationDate":"2025-12-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145927549","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Enhanced ISAR imaging of UAVs: Noise reduction via weighted atomic norm minimization and 2D-ADMM
IF 2.7 | CAS Category 3 (Engineering & Technology) | Q2 ENGINEERING, ELECTRICAL & ELECTRONIC | Pub Date: 2025-12-24 | DOI: 10.1016/j.image.2025.117468
Mohammad Roueinfar, Mohammad Hossein Kahaei
The effect of noise on Inverse Synthetic Aperture Radar (ISAR) with sparse apertures makes high-resolution image reconstruction challenging at low Signal-to-Noise Ratios (SNRs). It is well known that image resolution is governed by the bandwidth of the transmitted signal and by the Coherent Processing Interval (CPI) in the range and azimuth dimensions, respectively. To reduce the noise effect and thus increase the two-dimensional resolution of Unmanned Aerial Vehicle (UAV) images, we propose the Fast Reweighted Atomic Norm Denoising (FRAND) algorithm, which incorporates weighted atomic norm minimization. To solve the resulting problem, a Two-Dimensional Alternating Direction Method of Multipliers (2D-ADMM) algorithm is developed to speed up the implementation. Assuming sparse apertures for ISAR images of UAVs, we compare the proposed method with the MUltiple SIgnal Classification (MUSIC), Cadzow, and SL0 methods at different SNRs. Simulation results show the superiority of FRAND at low SNRs under the Mean-Square Error (MSE), Peak Signal-to-Noise Ratio (PSNR), and Structural Similarity Index Measure (SSIM) criteria.
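For readers unfamiliar with ADMM, the sketch below shows the alternating-update structure on a much simpler surrogate problem, min_x 0.5*||x - y||^2 + lam*||z||_1 subject to x = z. The paper's weighted-atomic-norm subproblems are substantially more involved, so this is illustrative only.

```python
# Generic ADMM skeleton for an l1-regularized denoising surrogate.
import numpy as np

def soft_threshold(v, t):
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def admm_denoise(y, lam=0.1, rho=1.0, iters=100):
    x = y.copy(); z = y.copy(); u = np.zeros_like(y)   # primal, auxiliary, scaled dual
    for _ in range(iters):
        x = (y + rho * (z - u)) / (1.0 + rho)          # x-update: closed-form quadratic solve
        z = soft_threshold(x + u, lam / rho)           # z-update: proximal (shrinkage) step
        u = u + x - z                                  # dual ascent on the constraint x = z
    return x
```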
{"title":"Enhanced ISAR imaging of UAVs: Noise reduction via weighted atomic norm minimization and 2D-ADMM","authors":"Mohammad Roueinfar,&nbsp;Mohammad Hossein Kahaei","doi":"10.1016/j.image.2025.117468","DOIUrl":"10.1016/j.image.2025.117468","url":null,"abstract":"<div><div>The effect of noise on the Inverse Synthetic Aperture Radar (ISAR) with sparse apertures is challenging for image reconstruction with high resolution at low Signal-to-Noise Ratios (SNRs). It is well-known that the image resolution is affected by the bandwidth of the transmitted signal and the Coherent Processing Interval (CPI) in two dimensions, range and azimuth, respectively. To reduce the noise effect and thus increase the two-dimensional resolution of Unmanned Aerial Vehicles (UAVs) images, we propose the Fast Reweighted Atomic Norm Denoising (FRAND) algorithm by incorporating the weighted atomic norm minimization. To solve the problem, the Two-Dimensional Alternating Direction Method of Multipliers (2D-ADMM) algorithm is developed to speed up the implementation procedure. Assuming sparse apertures for ISAR images of UAVs, we compare the proposed method with the MUltiple SIgnal Classification (MUSIC), Cadzow, and <span><math><msub><mrow><mi>SL</mi></mrow><mrow><mn>0</mn></mrow></msub></math></span> methods in different SNRs. Simulation results show the superiority of FRAND at low SNRs based on the Mean-Square Error (MSE), Peak Signal-to-Noise ratio (PSNR), and Structural Similarity Index Measure (SSIM) criteria.</div></div>","PeriodicalId":49521,"journal":{"name":"Signal Processing-Image Communication","volume":"142 ","pages":"Article 117468"},"PeriodicalIF":2.7,"publicationDate":"2025-12-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"146023545","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Video object segmentation based on feature compression and attention correction
IF 2.7 | CAS Category 3 (Engineering & Technology) | Q2 ENGINEERING, ELECTRICAL & ELECTRONIC | Pub Date: 2025-12-24 | DOI: 10.1016/j.image.2025.117456
Zhiqiang Hou, Jiale Dong, Chenxu Wang, Sugang Ma, Wangsheng Yu, Yuncheng Wang
Video object segmentation algorithms based on memory networks store information about the target object in an externally maintained memory bank. As segmentation progresses, the size of the memory bank keeps growing, leading to redundant feature information and reduced execution efficiency. In addition, the key-value pairs stored in the memory bank undergo channel dimension reduction using standard convolution, which limits the representation ability of target object features. In response to these issues, this paper proposes a video object segmentation algorithm based on feature compression and attention correction, constructing a reliable and effective memory bank that ensures efficient storage and updating of target object information, thereby reducing computational complexity and storage consumption. A dual attention mechanism over the spatial and channel dimensions is proposed to correct feature information and enhance the representation ability of features. Extensive experiments show that the proposed algorithm is reliably competitive with other recent mainstream algorithms.
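A CBAM-style sketch of the dual spatial-and-channel correction is given below, assuming PyTorch; the reduction ratio and the 7x7 spatial kernel are conventional defaults, not the paper's reported configuration.

```python
# Sketch only: sequential channel attention then spatial attention.
import torch
import torch.nn as nn

class DualAttention(nn.Module):
    def __init__(self, channels, reduction=16):
        super().__init__()
        self.channel_mlp = nn.Sequential(
            nn.Linear(channels, channels // reduction), nn.ReLU(),
            nn.Linear(channels // reduction, channels))
        self.spatial_conv = nn.Conv2d(2, 1, kernel_size=7, padding=3)

    def forward(self, x):                                  # x: (B, C, H, W)
        avg = x.mean(dim=(2, 3))                           # average-pooled channel descriptor
        mx = x.amax(dim=(2, 3))                            # max-pooled channel descriptor
        ca = torch.sigmoid(self.channel_mlp(avg) + self.channel_mlp(mx))
        x = x * ca[:, :, None, None]                       # channel correction
        sa = torch.sigmoid(self.spatial_conv(
            torch.cat([x.mean(1, keepdim=True), x.amax(1, keepdim=True)], dim=1)))
        return x * sa                                      # spatial correction
```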
{"title":"Video object segmentation based on feature compression and attention correction","authors":"Zhiqiang Hou,&nbsp;Jiale Dong,&nbsp;Chenxu Wang,&nbsp;Sugang Ma,&nbsp;Wangsheng Yu,&nbsp;Yuncheng Wang","doi":"10.1016/j.image.2025.117456","DOIUrl":"10.1016/j.image.2025.117456","url":null,"abstract":"<div><div>The video object segmentation algorithm based on memory networks stores the information of the target object through the maintained external memory inventory. As the segmentation progresses, the size of the memory inventory will continue to increase, leading to redundancy of feature information and affecting the execution efficiency of the algorithm. In addition, the key value pairs stored in the memory library are subjected to channel dimension reduction using standard convolution, resulting in insufficient representation ability of target object features. In response to the above issues, this chapter proposes a video object segmentation algorithm based on feature compression and attention correction, constructing a reliable and effective memory library to ensure efficient storage and updating of target object information, thereby reducing computational complexity and storage consumption. A dual attention mechanism based on spatial and channel dimensions was proposed to correct feature information and enhance the representation ability of features. A large number of experiments have shown that the proposed algorithm demonstrates reliable competitiveness compared to other mainstream algorithms in recent years.</div></div>","PeriodicalId":49521,"journal":{"name":"Signal Processing-Image Communication","volume":"142 ","pages":"Article 117456"},"PeriodicalIF":2.7,"publicationDate":"2025-12-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145824218","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Single object tracking based on Spatio-Temporal information
IF 2.7 | CAS Category 3 (Engineering & Technology) | Q2 ENGINEERING, ELECTRICAL & ELECTRONIC | Pub Date: 2025-12-24 | DOI: 10.1016/j.image.2025.117463
Lixin Wei , Yun Luo , Rongzhe Zhu , Xin Li
To address tracking difficulties caused by the absence of temporal dynamic information, as well as background clutter interference from similar backgrounds, similar objects, target occlusion, and illumination changes, this paper proposes a single object tracking algorithm based on spatio-temporal information (SST). The algorithm integrates a Temporal Adaptive Module (TAM) into the backbone network to generate a temporal kernel based on feature maps. This endows the network with the capability to model temporal dynamics, effectively utilizing the temporal relationships between frames to handle complex changes in target motion states and environmental conditions. Additionally, to mitigate background clutter interference, the algorithm employs a Mixed Local Channel Attention (MLCA) mechanism, which captures channel and spatial information to focus the network on the target and reduce the impact of interfering information. The proposed algorithm was evaluated on the OTB100, LaSOT, and NFS datasets. It achieved an AUC score of 70.7% on OTB100, a 1.3% improvement over the baseline tracker. On LaSOT and NFS, it obtained AUC scores of 65.1% and 65.9%, respectively, each a 0.2% improvement over the baseline tracker. The tracking speed exceeds 80 fps, and the performance of the SST algorithm has also been verified on self-recorded videos. The code is available at https://github.com/xuexiaodemenggubao/sst.
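One plausible reading of a feature-conditioned temporal kernel, in the spirit of TAM, is sketched below: pool each clip to a per-channel temporal signature, predict a small kernel from it, and apply that kernel as a depthwise 1-D convolution over time. The kernel size and the two-layer predictor are assumptions, not TAM's published design.

```python
# Sketch only: per-sample, per-channel adaptive temporal convolution.
import torch
import torch.nn as nn
import torch.nn.functional as F

class TemporalAdaptive(nn.Module):
    def __init__(self, channels, t_kernel=3):
        super().__init__()
        self.k = t_kernel
        self.predict = nn.Sequential(
            nn.Conv1d(channels, channels, 3, padding=1, groups=channels), nn.ReLU(),
            nn.Conv1d(channels, channels * t_kernel, 1, groups=channels))

    def forward(self, x):                                   # x: (B, C, T, H, W)
        b, c, t, h, w = x.shape
        sig = x.mean(dim=(3, 4))                            # (B, C, T) temporal signature
        kernel = self.predict(sig)                          # (B, C*k, T)
        kernel = kernel.view(b, c, self.k, t).mean(-1)      # time-averaged kernel: (B, C, k)
        kernel = F.softmax(kernel, dim=-1).view(b * c, 1, self.k, 1)
        seq = x.reshape(1, b * c, t, h * w)                 # fold batch into channels
        out = F.conv2d(seq, kernel, padding=(self.k // 2, 0), groups=b * c)
        return out.view(b, c, t, h, w)                      # adaptively smoothed over time
```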
{"title":"Single object tracking based on Spatio-Temporal information","authors":"Lixin Wei ,&nbsp;Yun Luo ,&nbsp;Rongzhe Zhu ,&nbsp;Xin Li","doi":"10.1016/j.image.2025.117463","DOIUrl":"10.1016/j.image.2025.117463","url":null,"abstract":"<div><div>To address the challenge of tracking difficulties due to the absence of temporal dynamic information and background clutter interference caused by similar backgrounds, similar objects, target occlusion, and illumination changes during target tracking, this paper proposes a single object tracking algorithm based on spatio-temporal information (SST). The algorithm integrates a Temporal Adaptive Module (TAM) into the backbone network to generate a temporal kernel based on feature maps. This endows the network with the capability to model temporal dynamics, effectively utilizing the temporal relationships between frames to handle complex temporal dynamics such as changes in target motion states and environmental conditions. Additionally, to mitigate background clutter interference, the algorithm employs a Mixed Local Channel Attention (MLCA) mechanism, which captures channel and spatial information to focus the network on the target and reduce the impact of interfering information. The proposed algorithm was evaluated on OTB100, LaSOT, and NFS datasets. It achieved an AUC score of 70.7% on OTB, which represents a 1.3% improvement over the baseline tracker. On LaSOT and NFS datasets, it obtained AUC scores of 65.1% and 65.9%, respectively, showing improvements of 0.2% compared to the baseline tracker. The tracking speed exceeds 80fps, and the performance of the SST algorithm has been verified on self-made videos. The code is available at <span><span>https://github.com/xuexiaodemenggubao/sst</span><svg><path></path></svg></span>.</div></div>","PeriodicalId":49521,"journal":{"name":"Signal Processing-Image Communication","volume":"142 ","pages":"Article 117463"},"PeriodicalIF":2.7,"publicationDate":"2025-12-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145824220","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
CS-YOLO: A small object detection model based on YOLO for UAV aerial photography
IF 2.7 | CAS Category 3 (Engineering & Technology) | Q2 ENGINEERING, ELECTRICAL & ELECTRONIC | Pub Date: 2025-12-24 | DOI: 10.1016/j.image.2025.117460
Rui Fan , Renhao Jiao , Weigui Nan , Haitao Meng , Abin Jiang , Xiaojia Yang , Zhiqiang Zhao , Jin Dang , Zhixue Wang , Yanshan Tian , Baiying Dong , Xiaowei He , Xiaoli Luo
With the rapid development of the UAV industry, object detection based on UAV aerial images is finding increasingly wide application. However, targets in UAV aerial images are small, dense, and disturbed by complex environments, which makes object detection highly challenging. To address the problems of dense small targets and strong background interference in UAV aerial images, we propose a YOLO-based UAV aerial image detection model, Content-Conscious and Scale-Sensitive YOLO (CS-YOLO). Unlike existing YOLO-based approaches, our contribution lies in the joint design of the Bottleneck Attention Module-cross-stage partial (BAM-CSP), the Multi-Scale Pooling Attention Fusion Module (MPAFM), and the Feature Difference Fusion Module (FDFM). The BAM-CSP module significantly enhances the small-target feature response by integrating a channel attention mechanism at the bottleneck layer of the cross-stage partial network; the MPAFM module adopts a multi-scale pooling attention fusion architecture, which suppresses complex background interference through parallel pooling and enhances background perception for small targets. The FDFM module captures information changes during the sampling process through a feature difference fusion mechanism. The Gradient Adaptive-Efficient IoU (GA-EIoU) loss function is introduced to optimize bounding-box regression by incorporating an EIoU gradient-constraint weighting mechanism. In comparative experiments on the VisDrone2019 dataset, CS-YOLO achieves 22.6% mAP@50:95, which is 2.7% higher than YOLO11n; on the HazyDet dataset, CS-YOLO achieves 53.8% mAP@50:95, an increase of 2.8%. CS-YOLO also surpasses existing advanced methods in recall rate and robustness. We further conducted ablation experiments to verify the contribution of each module to detection performance. The model effectively solves problems such as dense small targets and strong environmental interference in UAV aerial images, and provides a high-precision, real-time, and reliable detection scheme for complex tasks such as UAV inspection. The source code will be available at https://github.com/unscfr/CS-YOLO.
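The plain EIoU penalty that GA-EIoU builds on combines the IoU term with center-distance, width, and height penalties, each normalized by the smallest enclosing box. The sketch below implements standard EIoU only; the paper's gradient-adaptive weighting is not reproduced here.

```python
# Sketch of the standard EIoU loss (IoU + center, width, height penalties).
import torch

def eiou_loss(pred, target, eps=1e-7):
    """Boxes as (x1, y1, x2, y2) tensors of shape (N, 4)."""
    # Intersection and union.
    lt = torch.max(pred[:, :2], target[:, :2])
    rb = torch.min(pred[:, 2:], target[:, 2:])
    wh = (rb - lt).clamp(min=0)
    inter = wh[:, 0] * wh[:, 1]
    area_p = (pred[:, 2] - pred[:, 0]) * (pred[:, 3] - pred[:, 1])
    area_t = (target[:, 2] - target[:, 0]) * (target[:, 3] - target[:, 1])
    iou = inter / (area_p + area_t - inter + eps)
    # Smallest enclosing box.
    enc = torch.max(pred[:, 2:], target[:, 2:]) - torch.min(pred[:, :2], target[:, :2])
    cw, ch = enc[:, 0], enc[:, 1]
    # Normalized center-distance penalty.
    cp = (pred[:, :2] + pred[:, 2:]) / 2
    ct = (target[:, :2] + target[:, 2:]) / 2
    dist = ((cp - ct) ** 2).sum(dim=1) / (cw ** 2 + ch ** 2 + eps)
    # Width and height penalties, each normalized by the enclosing box.
    wp, hp = pred[:, 2] - pred[:, 0], pred[:, 3] - pred[:, 1]
    wt, ht = target[:, 2] - target[:, 0], target[:, 3] - target[:, 1]
    dw = (wp - wt) ** 2 / (cw ** 2 + eps)
    dh = (hp - ht) ** 2 / (ch ** 2 + eps)
    return (1 - iou + dist + dw + dh).mean()
```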
{"title":"CS-YOLO:A small object detection model based on YOLO for UAV aerial photography","authors":"Rui Fan ,&nbsp;Renhao Jiao ,&nbsp;Weigui Nan ,&nbsp;Haitao Meng ,&nbsp;Abin Jiang ,&nbsp;Xiaojia Yang ,&nbsp;Zhiqiang Zhao ,&nbsp;Jin Dang ,&nbsp;Zhixue Wang ,&nbsp;Yanshan Tian ,&nbsp;Baiying Dong ,&nbsp;Xiaowei He ,&nbsp;Xiaoli Luo","doi":"10.1016/j.image.2025.117460","DOIUrl":"10.1016/j.image.2025.117460","url":null,"abstract":"<div><div>With the rapid development of the UAV industry, the application of object detection technology based on UAV aerial images is becoming more and more extensive. However, the target of UAV aerial image is small, dense and disturbed by complex environment, which makes object detection face great challenges. In order to solve the problems of dense small targets and strong background interference in UAV aerial images, we propose a YOLO-based UAV aerial image detection model-Content-Conscious and Scale-Sensitive (CS-YOLO). Unlike existing YOLO-based approaches, our contribution lies in the joint design of Bottleneck Attention Module-cross-stage partial (BAM-CSP), Multi-Scale Pooling Attention Fusion Module (MPAFM) and Feature Difference Fusion Module (FDFM). The BAM-CSP module significantly enhances the small target feature response by integrating the channel attention mechanism at the bottleneck layer of the cross-stage partial network; the MPAFM module adopts a multi-scale pooling attention fusion architecture, which suppresses complex background interference through parallel pooling and enhances the background perception ability of small targets. The FDFM module captures the information changes during the sampling process through the feature difference fusion mechanism. The Gradient Adaptive-Efficient IoU (GA-EIoU) loss function is introduced to optimize bounding box regression performance by incorporating the EIoU gradient constraint weighting mechanism. Comparative experiments on the VisDrone2019 dataset, CS-YOLO achieves 22.6% mAP@50:95, which is 2.7% higher than YOLO11n; on the HazyDet dataset, CS-YOLO achieved 53.8% mAP@50:95, an increase of 2.8%. CS-YOLO also comprehensively surpasses the existing advanced methods in terms of recall rate and robustness. Meanwhile, we conducted ablation experiments to verify the gain effect of each module on the detection performance. The model effectively solves the technical problems such as dense small targets and strong environmental interference in UAV aerial images, and provides a high-precision, real-time and reliable detection scheme for complex tasks such as UAV inspection. The source code will be available at <span><span>https://github.com/unscfr/CS-YOLO</span><svg><path></path></svg></span>.</div></div>","PeriodicalId":49521,"journal":{"name":"Signal Processing-Image Communication","volume":"142 ","pages":"Article 117460"},"PeriodicalIF":2.7,"publicationDate":"2025-12-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145885408","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Multi-stream interaction network with cross-modal contrast distillation for co-salient object detection
IF 2.7 | CAS Category 3 (Engineering & Technology) | Q2 ENGINEERING, ELECTRICAL & ELECTRONIC | Pub Date: 2025-12-23 | DOI: 10.1016/j.image.2025.117454
Wujie Zhou , Bingying Wang , Xiena Dong , Caie Xu , Fangfang Qiang
Co-salient object detection is a challenging task. Despite advances in existing detectors, two problems remain unsolved. First, although depth maps complement spatial information, existing methods do not effectively fuse multimodal information, and multiscale features are not aggregated appropriately to predict co-salient maps. Second, existing deep-learning methods usually require large numbers of parameters; thus, model sizes must be reduced while ensuring accuracy to enable them to run on resource-constrained end devices. We propose a multi-stream interaction cooperative encoder that constructs early fusion branches to improve modal interactions, and a two-stage transformer decoder to promote multiscale feature fusion. Finally, a multi-stream interaction network with cross-modal contrast knowledge distillation is proposed to connect the student and teacher models, improving the performance of the student model while sustaining low computing requirements and achieving collaborative co-salient detection. Our solution is based on a teacher–student architecture that uses contrastive learning to transfer knowledge between deep networks while enhancing semantic consistency and suppressing noise. We employ cross-modal contrast distillation and attention modules in the encoding and decoding phases, respectively, to enhance response-channel and spatial consistency. In addition, a collaborative contrast-learning module is employed to better convey structural knowledge and help the student model obtain more accurate group semantic information. Experiments on benchmark datasets show the superior performance of the proposed multi-stream interaction network with cross-modal contrast knowledge distillation in collaborative saliency target detection.
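Contrastive distillation between the teacher and student can be illustrated with an InfoNCE-style objective that pulls each student embedding toward its own teacher embedding and away from the others. This is a sketch under assumptions: the shared embedding dimension and the temperature are hypothetical, not the paper's settings.

```python
# Sketch only: InfoNCE-style contrastive distillation loss.
import torch
import torch.nn.functional as F

def contrastive_distill(student, teacher, tau=0.07):
    """student/teacher: (N, D) embeddings of the same N samples."""
    s = F.normalize(student, dim=1)
    t = F.normalize(teacher, dim=1).detach()        # teacher provides fixed targets
    logits = s @ t.T / tau                          # (N, N) similarity matrix
    labels = torch.arange(s.size(0), device=s.device)
    return F.cross_entropy(logits, labels)          # match each student to its own teacher
```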
{"title":"Multi-stream interaction network with cross-modal contrast distillation for co-salient object detection","authors":"Wujie Zhou ,&nbsp;Bingying Wang ,&nbsp;Xiena Dong ,&nbsp;Caie Xu ,&nbsp;Fangfang Qiang","doi":"10.1016/j.image.2025.117454","DOIUrl":"10.1016/j.image.2025.117454","url":null,"abstract":"<div><div>Co-salient object detection is a challenging task. Despite advances in existing detectors, two problems remain unsolved. First, although depth maps complement spatial information, existing methods do not effectively fuse multimodal information, and multiscale features are not aggregated appropriately to predict co-salient maps. Second, existing deep-learning methods usually require large numbers of parameters; thus, model sizes must be reduced while ensuring accuracy to enable them to run on streamlined end devices. We propose a multi-stream interaction cooperative encoder by constructing early fusion branches to improve modal interactions and a two-stage transformer decoder to promote multiscale feature fusion. Finally, a multi-stream interaction network with cross-modal contrast knowledge distillation is proposed to connect student and teacher models to improve the performance of the student model while sustaining low computing requirements and achieving collaborative co-salient detection. Our solution is based on a teacher–student architecture that uses contrastive learning to transfer knowledge between deep networks while enhancing semantic consistency and suppressing noise. We employ cross-modal contrast distillation and attention modules in the encoding and decoding phases, respectively, to enhance the response channel and spatial consistency. In addition, a collaborative contrast-learning module is employed to better convey structural knowledge to help students obtain more accurate group semantic information. Experiments on benchmark datasets show the superior performance of the proposed multi-stream interaction network with cross-modal contrast knowledge distillation in collaborative saliency target detection.</div></div>","PeriodicalId":49521,"journal":{"name":"Signal Processing-Image Communication","volume":"142 ","pages":"Article 117454"},"PeriodicalIF":2.7,"publicationDate":"2025-12-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145842305","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
HOICNet: Low-Dose CT image denoising network based on higher-order feature attention mechanism and irregular convolution
IF 2.7 | CAS Category 3 (Engineering & Technology) | Q2 ENGINEERING, ELECTRICAL & ELECTRONIC | Pub Date: 2025-12-23 | DOI: 10.1016/j.image.2025.117457
Aimin Huang , Lina Jia , Beibei Jia , Zhiguo Gui , Jianan Liang
Convolutional Neural Networks (CNNs) with attention mechanisms show great potential for improving low-dose computed tomography (LDCT) image quality. However, most of these methods use first-order statistics for channel or spatial processing, ignoring the higher-order statistics of channel or spatial features. In addition, conventional convolution has a limited receptive field and performs poorly at the edges of LDCT images. In this study, we aim to develop a CNN model incorporating a higher-order feature attention mechanism that both enlarges the receptive field and clearly recovers edges and details. We propose an LDCT image denoising network, named HOICNet, based on a higher-order feature attention mechanism and irregular convolution. Specifically, we first propose a new higher-order feature attention mechanism that utilizes higher-order feature statistics to enhance features in different channels and spatial regions. Second, we propose a new irregular convolutional feature extraction module (ICFE) that contains self-calibrating convolution (SC) and side window convolution (SWC): SC is used to enlarge receptive fields, and SWC is used to improve edge information in denoised images. Finally, we introduce a contrast regularization mechanism (CRM) with positive and negative samples that pulls the denoised image toward the positive samples while pushing it away from the negative samples, alleviating over-smoothing of the denoised images. Our experimental results show that the peak signal-to-noise ratio (PSNR), structural similarity (SSIM), root mean square error (RMSE), and visual information fidelity (VIF) values achieve significant improvements on both the AAPM dataset and the piglet dataset.
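One way to realize channel attention from second-order statistics is to replace the usual mean descriptor with a channel covariance matrix, as sketched below. The 1x1 projection, reduction ratio, and row-wise pooling of the covariance are assumptions in the spirit of the paper, not its exact module.

```python
# Sketch only: covariance-pooled ("second-order") channel attention.
import torch
import torch.nn as nn

class SecondOrderChannelAttention(nn.Module):
    def __init__(self, channels, reduction=8):
        super().__init__()
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction), nn.ReLU(),
            nn.Linear(channels // reduction, channels), nn.Sigmoid())

    def forward(self, x):                                   # x: (B, C, H, W)
        b, c, h, w = x.shape
        feat = x.view(b, c, h * w)
        feat = feat - feat.mean(dim=2, keepdim=True)        # center per channel
        cov = feat @ feat.transpose(1, 2) / (h * w - 1)     # (B, C, C) channel covariance
        desc = cov.mean(dim=2)                              # row-wise second-order descriptor
        return x * self.fc(desc)[:, :, None, None]          # reweight channels
```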
{"title":"HOICNet: Low-Dose CT image denoising network based on higher-order feature attention mechanism and irregular convolution","authors":"Aimin Huang ,&nbsp;Lina Jia ,&nbsp;Beibei Jia ,&nbsp;Zhiguo Gui ,&nbsp;Jianan Liang","doi":"10.1016/j.image.2025.117457","DOIUrl":"10.1016/j.image.2025.117457","url":null,"abstract":"<div><div>Convolution Neural Networks (CNNs) with attention mechanisms show great potential for improving low-dose computed tomography (LDCT) image quality. However, most of these methods use first-order statistics for channel or space processing, ignoring the higher-order statistics of the channel or space features. In addition, the conventional convolution has limited receptive field and a poor performance on the edge of LDCT images. In this study, we aim to develop a CNN model incorporating higher-order feature attention mechanism that both enlarges the receptive field and clearly recovers edges and details. We propose an LDCT image denoising network named as HOICNet based on a higher-order feature attention mechanism and irregular convolution. Specifically, we first propose a new higher-order feature attention mechanism that utilizes higher-order feature statistics to enhance features in different channels and spatial regions. Second, we propose a new irregular convolutional feature extraction module (ICFE) that contains self-calibrating convolution (SC) and side window convolution (SWC). SC is used to enlarge receptive fields, and SWC is used to improve the edge information in denoised images. Finally, we introduce the contrast regularization mechanism (CRM) with positive and negative samples to bring the denoised image closer and closer to the positive samples while moving away from the negative samples to alleviate the problem of over-smoothing of the denoised images. Our experimental results show that the peak signal-to-noise ratio (PSNR), the structural similarity (SSIM), the root mean square error (RMSE) and the visual information fidelity (VIF) values achieved significant improvements in both the AAPM dataset and the piglet dataset.</div></div>","PeriodicalId":49521,"journal":{"name":"Signal Processing-Image Communication","volume":"142 ","pages":"Article 117457"},"PeriodicalIF":2.7,"publicationDate":"2025-12-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145885409","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0