
Latest Articles in Signal Processing-Image Communication

MGMSDNet: Multi gradient multi scale attention driven denoiser network
IF 2.7 | CAS Tier 3 (Engineering & Technology) | Q2 ENGINEERING, ELECTRICAL & ELECTRONIC | Pub Date: 2025-10-27 | DOI: 10.1016/j.image.2025.117426
Debashis Das, Suman Kumar Maji
Image denoising is essential in applications such as medical imaging, remote sensing, and photography. Despite advancements in deep learning, denoising models still face key limitations. Most state-of-the-art methods increase network depth to boost performance, leading to higher computational costs, complex training, and diminishing returns. Moreover, the role of gradient information and negative image features in denoising is often overlooked, limiting the ability to capture fine structures. Our observations reveal that excessively deep networks can reduce denoising performance by introducing redundancy and complicating feature extraction. To address this, we propose MGMSDNet, a Gradient-Guided Convolutional Neural Network (CNN) with attention mechanisms that balance denoising performance and computational efficiency. MGMSDNet introduces a unique attention framework that utilizes multidirectional gradients and negative image features separately, enhancing structural preservation and noise suppression. To the best of our knowledge, this study is the first in the image denoising literature to explore multidirectional gradients. MGMSDNet surpasses state-of-the-art methods on benchmark datasets, confirmed by quantitative metrics and visual comparisons. Ablation studies highlight the effectiveness of individual network components. For more details and implementation, visit our GitHub repository: MGMSDNet.
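The abstract credits multidirectional gradients with preserving fine structure. The paper's exact gradient operators are not given here; as a hedged illustration, finite differences along four directions (0°, 45°, 90°, 135°) can be computed like this:

```python
def directional_gradients(img):
    """Finite-difference gradients of a 2D grayscale image (list of rows)
    along four directions: 0, 45, 90 and 135 degrees. Illustrative only;
    MGMSDNet's actual gradient operators are not specified in the abstract."""
    h, w = len(img), len(img[0])
    grads = {d: [[0.0] * w for _ in range(h)] for d in ("0", "45", "90", "135")}
    for y in range(h):
        for x in range(w):
            if x + 1 < w:                    # 0 deg: horizontal difference
                grads["0"][y][x] = img[y][x + 1] - img[y][x]
            if y + 1 < h:                    # 90 deg: vertical difference
                grads["90"][y][x] = img[y + 1][x] - img[y][x]
            if x + 1 < w and y + 1 < h:      # 45 deg: down-right diagonal
                grads["45"][y][x] = img[y + 1][x + 1] - img[y][x]
            if x >= 1 and y + 1 < h:         # 135 deg: down-left diagonal
                grads["135"][y][x] = img[y + 1][x - 1] - img[y][x]
    return grads

# A vertical step edge responds in the horizontal gradient but not the
# vertical one.
step = [[0, 0, 1, 1] for _ in range(4)]
g = directional_gradients(step)
```

A denoiser can feed such direction-specific maps to separate attention branches so that edges of different orientations are weighted independently.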
Signal Processing-Image Communication, vol. 140, Article 117426.
Citations: 0
Contrast and clustering: Learning neighborhood pair representation for source-free domain adaptation
IF 2.7 | CAS Tier 3 (Engineering & Technology) | Q2 ENGINEERING, ELECTRICAL & ELECTRONIC | Pub Date: 2025-10-24 | DOI: 10.1016/j.image.2025.117429
Yuqi Chen, Xiangbin Zhu, Yonggang Li, Yingjian Li, Haojie Fang
Unsupervised domain adaptation aims to address the challenge of classifying data from unlabeled target domains by leveraging source data from different distributions. However, conventional methods often necessitate access to source data, raising concerns about data privacy. In this paper, we tackle a more practical yet challenging scenario where the source domain data is unavailable and the target domain data remains unlabeled. To address the domain discrepancy problem, we propose a novel approach from the perspective of contrastive learning. Our key idea revolves around learning a domain-invariant feature by: (1) constructing abundant pairs for feature learning by utilizing neighboring samples; (2) refining the negative-pair pool to reduce learning confusion; and (3) applying noise-contrastive theory to simplify the loss function effectively. Through careful ablation studies and extensive experiments on three common benchmarks, VisDA, Office-Home, and Office-31, we demonstrate the superiority of our method over other state-of-the-art works. Our proposed approach not only offers practicality by alleviating the requirement of source domain data but also achieves remarkable performance in handling domain adaptation challenges. The code is available at https://github.com/yukilulu/CaC.
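The neighborhood-pair idea can be sketched as an InfoNCE-style loss in which each sample's positives are its nearest neighbours in feature space. The function names and the k-nearest-neighbour pairing below are illustrative assumptions, not the paper's actual CaC implementation:

```python
import math

def cosine(a, b):
    """Cosine similarity between two non-zero feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(x * x for x in b)))

def neighbor_contrastive_loss(feats, k=1, tau=0.1):
    """InfoNCE-style loss whose positive pairs are each sample's k nearest
    neighbours in feature space; every other sample acts as a negative.
    A hypothetical sketch: the paper additionally refines the negative pool
    and extracts features with a network, both omitted here."""
    n, loss = len(feats), 0.0
    for i in range(n):
        sims = sorted(((cosine(feats[i], feats[j]), j) for j in range(n) if j != i),
                      reverse=True)
        denom = sum(math.exp(s / tau) for s, _ in sims)
        for s, _ in sims[:k]:                 # top-k similarities = neighbours
            loss += -math.log(math.exp(s / tau) / denom)
    return loss / (n * k)

# Two tight clusters: each sample's neighbour dominates the softmax, so the
# loss is close to zero.
feats = [[1.0, 0.0], [0.99, 0.01], [0.0, 1.0], [0.01, 0.99]]
loss = neighbor_contrastive_loss(feats)
```

Minimizing such a loss pulls neighbouring target samples together while pushing the rest apart, which is what lets clustering emerge without source data.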
Signal Processing-Image Communication, vol. 140, Article 117429.
Citations: 0
Dark light image recognition technology based on improved ssa and object detection
IF 2.7 | CAS Tier 3 (Engineering & Technology) | Q2 ENGINEERING, ELECTRICAL & ELECTRONIC | Pub Date: 2025-10-22 | DOI: 10.1016/j.image.2025.117427
Yuan Xu, Shaobo Cui, Fan Feng, Hao Wang
Images captured in dark light environments suffer from low contrast, high noise, and loss of detail, which seriously affect the accuracy of target recognition. To address this, a dark light image recognition technology based on an improved squirrel search algorithm and object detection is developed, aiming to improve image quality and recognition accuracy in dark light environments by optimizing the image enhancement and object detection algorithms. This study proposes an improved squirrel search algorithm that optimizes the image enhancement process through strategies such as bidirectional search, spiral foraging, and greedy selection. It combines a cyclic pixel adjustment module, a multi-branch feature extraction network, and an object detection architecture adapted to low light environments to build a complete dark light image recognition model. The experimental findings indicate that the PSNR of the image enhancement algorithm grounded on the improved squirrel search algorithm is 26.74 dB and the structural similarity is 0.85, significantly better than the comparison algorithms. The average accuracy of the object detection part under dark light conditions is 67.9%, better than the comparison algorithms. The ablation experiment verifies the effectiveness of the improved strategy, and the overall class-average accuracy of the complete model under extreme low light conditions is 66.4%. The proposed recognition model performs well in dark light image enhancement and object detection tasks, combining high accuracy and robustness, and provides an effective solution for image recognition under complex lighting conditions.
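The quoted PSNR figure (26.74 dB) follows the standard definition of the metric; a minimal sketch, independent of the paper's pipeline:

```python
import math

def psnr(ref, test, peak=255.0):
    """Peak signal-to-noise ratio (dB) between two equal-size grayscale
    images given as 2D lists; higher means closer to the reference."""
    h, w = len(ref), len(ref[0])
    mse = sum((ref[y][x] - test[y][x]) ** 2
              for y in range(h) for x in range(w)) / (h * w)
    return float("inf") if mse == 0 else 10.0 * math.log10(peak ** 2 / mse)

ref = [[10, 20], [30, 40]]
noisy = [[12, 18], [33, 37]]     # squared errors 4, 4, 9, 9 -> MSE = 6.5
value = psnr(ref, noisy)         # 10 * log10(255**2 / 6.5), about 40.0 dB
```

PSNR is a fidelity measure against a clean reference; it complements the structural-similarity (SSIM) score also reported in the abstract.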
Signal Processing-Image Communication, vol. 140, Article 117427.
Citations: 0
Distance self-adaptive fuzzy c-means and its application to image segmentation
IF 2.7 | CAS Tier 3 (Engineering & Technology) | Q2 ENGINEERING, ELECTRICAL & ELECTRONIC | Pub Date: 2025-10-13 | DOI: 10.1016/j.image.2025.117424
Shuaizheng Chen, Chaolu Feng, Dongxiu Li, Zijian Bian, Wei Li, Dazhe Zhao
Size sensitivity is one of the primary disadvantages of fuzzy c-means (FCM). Existing improvements assume that clusters with more samples are larger than clusters with fewer samples in terms of the count of the cluster samples. An obvious counter-example is that samples with the same spatial coordinates do not increase the territory of the cluster they belong to. Given a spatial transformation, we first define the territory size of each cluster in the space. We then propose a distance-normalized FCM (DFCM) where distances of samples to each cluster centre are self-adaptively adjusted based on the territory size of the cluster, to prevent cluster centres deviating from smaller clusters to adjacent larger clusters. In addition, FCM is taken as a balance term in the objective function of DFCM, which gradually weakens during the iterations, to keep DFCM from being trapped by a local optimal solution. We specialize the spatial transformation as kernel functions to address the non-linear separability problem in the Hilbert space. As kernel selection is still an open problem, we propose a homogeneous and an inhomogeneous sample partition to construct an undirected graph and specialize DFCM in the graph space. We finally evaluate and compare DFCM and its specializations with 12 FCM-based methods on 6 datasets in terms of 8 metrics. The performance generally improves by 2%–5%. Initialization sensitivity, parameter effects and settings, robustness to noise and bias field, limitations, and future works are also discussed. Results show that the proposed method is more robust than competitors.
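The distance adjustment can be illustrated with the standard FCM membership update applied after each sample-to-centre distance is divided by a per-cluster territory size. The division-by-size form and the `sizes` argument below are assumptions for illustration; DFCM derives territory sizes from a spatial transformation not reproduced here:

```python
def memberships(dists, sizes, m=2.0):
    """Fuzzy c-means membership update after dividing each sample-to-centre
    distance by that cluster's territory size. Illustrative sketch only:
    the division-by-size form and the `sizes` argument are assumptions,
    not DFCM's actual territory formulation."""
    norm = [d / s for d, s in zip(dists, sizes)]   # territory-normalized distances
    p = 2.0 / (m - 1.0)                            # standard FCM exponent
    return [1.0 / sum((di / dj) ** p for dj in norm) for di in norm]

# A sample equidistant (d = 2) from a large cluster (size 4) and a small one
# (size 1): plain FCM would split membership 50/50, while the normalized
# distances assign most of it to the large cluster, so the small cluster's
# centre is not dragged toward the shared sample.
u = memberships([2.0, 2.0], [4.0, 1.0])    # [16/17, 1/17]
```

The memberships still sum to one per sample, as in ordinary FCM; only the effective distances change.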
Signal Processing-Image Communication, vol. 140, Article 117424.
Citations: 0
Research on an adaptive robust ellipse fitting method integrating multiple weight strategies
IF 2.7 | CAS Tier 3 (Engineering & Technology) | Q2 ENGINEERING, ELECTRICAL & ELECTRONIC | Pub Date: 2025-10-11 | DOI: 10.1016/j.image.2025.117425
Bo-Lin Jian, Chao-Chung Peng, Wen-Lin Chu
This study aims to develop a robust ellipse fitting method that maintains computational efficiency comparable to Direct Least Squares (DLS) while achieving superior noise resistance for image processing and shape recognition applications. We propose Adaptive Dual-Robust Ellipse Fitting (ADREF), which integrates three novel contributions: (1) adaptive candidate clustering using k-nearest neighbor distances to adjust epsilon values, eliminating fixed parameter dependency automatically; (2) dual-robust weighting strategy combining Huber and Tukey Biweight functions with mixing coefficient α = 0.5, enhanced by image grayscale intensity information; and (3) nonlinear minimization correction based on geometric distance for improved parameter accuracy. Experimental validation across noise levels (σ = 1 to 11) demonstrates ADREF's superior performance over traditional methods. Compared to DLS, Weighted Least Squares (WLS), and Iteratively Reweighted Least Squares (IRLS), ADREF achieves the lowest errors in center coordinates, axis lengths, and rotation angles. Under high-noise conditions (σ = 11), ADREF maintains stable performance while other methods deteriorate significantly. Multi-ellipse experiments confirm automatic identification and fitting capabilities without pre-specifying target numbers. ADREF provides a breakthrough solution combining computational efficiency with robust noise resistance, particularly suitable for high-noise industrial vision applications, medical image analysis, and pattern recognition systems requiring accuracy and reliability.
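The dual-robust weighting quoted above (Huber combined with Tukey biweight, α = 0.5) can be sketched directly. The tuning constants 1.345 and 4.685 are the common textbook defaults for these weight functions, not values taken from the paper:

```python
def huber_w(r, c=1.345):
    """Huber weight: 1 inside the threshold, decaying as c/|r| outside."""
    a = abs(r)
    return 1.0 if a <= c else c / a

def tukey_w(r, c=4.685):
    """Tukey biweight: smooth decay that reaches exactly 0 beyond the threshold."""
    a = abs(r)
    return (1.0 - (a / c) ** 2) ** 2 if a <= c else 0.0

def dual_robust_w(r, alpha=0.5):
    """Convex combination of the two weights with the mixing coefficient
    alpha = 0.5 quoted in the abstract. Illustrative sketch: the tuning
    constants above are textbook defaults, not values from the paper."""
    return alpha * huber_w(r) + (1.0 - alpha) * tukey_w(r)

# Weights for an inlier, a moderate residual, and a gross outlier.
w_inlier, w_mid, w_outlier = dual_robust_w(0.0), dual_robust_w(2.0), dual_robust_w(100.0)
```

In an iteratively reweighted fit, each edge point's residual to the current ellipse is passed through such a function, so gross outliers contribute almost nothing to the next least-squares solve.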
Signal Processing-Image Communication, vol. 140, Article 117425.
Citations: 0
YUW-Net: A Y-shaped deep network for enhancement of Under-water images
IF 2.7 | CAS Tier 3 (Engineering & Technology) | Q2 ENGINEERING, ELECTRICAL & ELECTRONIC | Pub Date: 2025-10-11 | DOI: 10.1016/j.image.2025.117422
Rupankar Das, Chandrajit Choudhury
One of the main challenges in underwater imaging is the poor quality caused by the underwater propagation of light. In this work, this distortion is analyzed separately along the color and illumination channels, and accordingly a novel Convolutional Neural Network (CNN) architecture is proposed for the restoration of underwater images. The proposed method is independent of any approximated physical model or parameter-based model of underwater image formation. In the proposed approach the distortion is perceived as changes in the color and illumination distribution of the image, mainly in terms of the first and second moments. Extensive experiments are conducted to evaluate the proposed network on real underwater images alone and in combination with synthetically generated images. To support the claims, a thorough qualitative and quantitative comparison with state-of-the-art works is presented. The proposed network is found to outperform the state-of-the-art methods. For easy reproducibility of the work, on acceptance, the implementation of this work will be shared at https://github.com/ChandrajitChoudhury/YUWNET.
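The "first and second moments" the abstract refers to are per-channel means and standard deviations of the colour distribution. A minimal sketch of those statistics (the statistics only, not YUW-Net itself):

```python
import math

def channel_moments(pixels):
    """First and second moments (mean, standard deviation) per colour channel
    of an RGB image given as a flat list of (r, g, b) tuples. This only
    illustrates the statistics the abstract refers to; YUW-Net itself learns
    the colour/illumination correction."""
    n = len(pixels)
    stats = []
    for c in range(3):
        vals = [p[c] for p in pixels]
        mean = sum(vals) / n                       # first moment
        var = sum((v - mean) ** 2 for v in vals) / n
        stats.append((mean, math.sqrt(var)))       # (mean, std dev)
    return stats

# Underwater scenes typically lose red first: the red channel's first moment
# sits well below green and blue in this made-up patch.
patch = [(10, 120, 140), (20, 110, 150), (15, 115, 145)]
stats = channel_moments(patch)   # means 15.0, 115.0, 145.0
```

Shifting these moments toward those of in-air imagery is one simple way to frame the colour-cast correction the network learns.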
Signal Processing-Image Communication, vol. 140, Article 117422.
Citations: 0
MTDNet: A crowd counting network based on a multiscale transformer and dilated convolution
IF 2.7 | CAS Tier 3 (Engineering & Technology) | Q2 ENGINEERING, ELECTRICAL & ELECTRONIC | Pub Date: 2025-10-10 | DOI: 10.1016/j.image.2025.117423
Chongle Peng, Qingbing Sang, Xiaojun Wu, Zhaohong Deng, Lixiong Liu
Crowd counting methods based on Convolutional Neural Networks (CNNs) often struggle to model contextual features effectively due to their limited feature extraction scope. In contrast, the Transformer model, through its global self-attention mechanism, can consider information from all positions simultaneously, thereby modeling contextual relationships more comprehensively and enhancing the model's ability to understand sequences. Building on this, we propose a crowd counting network that leverages multi-scale features extracted by Transformer and dilated convolutions. Specifically, we utilize Vision Transformer to capture global information and integrate low-level detail features with high-level semantic features, enabling the model to effectively extract semantic features with global contextual information. Additionally, we design a multi-branch dilated convolution regression module, composed of dilated convolutions, to process feature maps generated during the encoding phase, which can effectively produce predicted density maps and yield accurate regression results. Furthermore, we introduce a novel loss function for our pipeline, which enhances the model's fitting capability and results in smoother density maps.
Extensive experiments on four public crowd counting datasets demonstrate that our method achieves state-of-the-art results on several benchmarks. Particularly on the ShanghaiTech dataset, MTDNet achieves the most advanced results, with the MSE improving by 2.6% compared to the next best model.
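The key property of dilated convolution, spacing kernel taps apart to enlarge the receptive field without adding parameters, can be shown in one dimension (MTDNet itself uses 2D dilated convolutions in a multi-branch module; this 1D version is illustrative only):

```python
def dilated_conv1d(x, kernel, dilation=1):
    """'Valid' 1D convolution (correlation form) with dilated kernel taps:
    taps are spaced `dilation` apart, enlarging the receptive field without
    adding parameters. MTDNet uses 2D dilated convolutions in a multi-branch
    module; this 1D version only shows the mechanism."""
    k = len(kernel)
    span = (k - 1) * dilation + 1            # receptive-field length
    return [sum(kernel[j] * x[i + j * dilation] for j in range(k))
            for i in range(len(x) - span + 1)]

x = [1, 2, 3, 4, 5, 6]
same = dilated_conv1d(x, [1, 1, 1], dilation=1)   # 3 adjacent taps
wide = dilated_conv1d(x, [1, 1, 1], dilation=2)   # same 3 taps span 5 inputs
```

Running several such branches with different dilation rates in parallel and concatenating their outputs is the usual way a multi-branch dilated module covers multiple scales at once.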
Signal Processing-Image Communication, vol. 140, Article 117423.
Citations: 0
An LDCT image denoising model based on dual-path attention
IF 2.7 Zone 3 (Engineering & Technology) Q2 ENGINEERING, ELECTRICAL & ELECTRONIC Pub Date : 2025-10-09 DOI: 10.1016/j.image.2025.117412
Xiaodong Guo , Xiaoyan Chang , Lifang Wang , Rongguo Zhang , Lihua Hu
Existing denoising methods for low-dose computed tomography (LDCT) images generally suffer from an imbalance between global correlation capture and local detail preservation, leading to blurring of the global image structure or loss of edge textures, which seriously impairs the accuracy of clinical diagnosis. To address this limitation, a dual-path attention denoising model is proposed. An LSTM-based attention module (LBAM) is introduced: it captures long-range sequence dependencies through a gating mechanism, focuses on key diagnostic information using residual connections, and enhances the modeling of cross-regional continuous morphology in the sequence dimension. Meanwhile, a multi-scale attention module (MSAM) is embedded in the U-Net architecture, performing parallel modeling via channel-spatial attention and self-attention to achieve collaborative optimization of local detail enhancement and global correlation capture. The results show that the method effectively balances the integrity of the global structure with the clarity of local details, significantly improving the quality of LDCT images. On the Mayo dataset, the method achieves a PSNR of 31.6725 and an SSIM of 0.8820, demonstrating its effectiveness in removing noise while preserving image details.
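For readers unfamiliar with the "channel-spatial attention" that MSAM runs in parallel with self-attention, a minimal numpy sketch of a CBAM-style channel-then-spatial gate follows. It is illustrative only: the learned MLP/convolution normally applied to the pooled descriptors is omitted, and all names are ours, not the paper's implementation:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def channel_gate(feat):
    """One weight per channel from global average- and max-pooled
    descriptors (the shared MLP of CBAM-style gates is omitted)."""
    avg = feat.mean(axis=(1, 2))          # (C,)
    mx = feat.max(axis=(1, 2))            # (C,)
    w = sigmoid(avg + mx)
    return feat * w[:, None, None]

def spatial_gate(feat):
    """One weight per pixel from channel-wise mean and max maps
    (the 7x7 convolution of CBAM-style gates is omitted)."""
    avg = feat.mean(axis=0)               # (H, W)
    mx = feat.max(axis=0)                 # (H, W)
    w = sigmoid(avg + mx)
    return feat * w[None, :, :]

rng = np.random.default_rng(0)
feat = rng.random((8, 16, 16))            # (C, H, W) feature map
out = spatial_gate(channel_gate(feat))    # channel gate, then spatial gate
```

The two gates answer complementary questions ("which channels matter" vs. "which pixels matter"), which is why dual-path designs pair them with global self-attention.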
{"title":"An LDCT image denoising model based on dual-path attention","authors":"Xiaodong Guo ,&nbsp;Xiaoyan Chang ,&nbsp;Lifang Wang ,&nbsp;Rongguo Zhang ,&nbsp;Lihua Hu","doi":"10.1016/j.image.2025.117412","DOIUrl":"10.1016/j.image.2025.117412","url":null,"abstract":"<div><div>Existing denoising methods for low-dose computed tomography (LDCT) images generally suffer from an imbalance between global correlation capture and local detail preservation, leading to blurring of the global image structure or loss of edge textures, which seriously impairs the accuracy of clinical diagnosis. To address this limitation, a dual-path attention denoising model is proposed. An LSTM-based attention module (LBAM) is introduced: it captures long-range sequence dependencies through a gating mechanism, focuses on key diagnostic information using residual connections, and enhances the modeling of cross-regional continuous morphology in the sequence dimension. Meanwhile, a multi-scale attention module (MSAM) is embedded in the U-Net architecture, which performs parallel modeling via channel-spatial attention and self-attention to achieve collaborative optimization of local detail enhancement and global correlation capture. The results show that the method can effectively balance the integrity of the global structure and the clarity of local details, significantly improving the quality of LDCT images. 
The PSNR and SSIM values in the Mayo dataset were 31.6725 and 0.8820, respectively, indicating the effectiveness of the method in removing noise and preserving image details.</div></div>","PeriodicalId":49521,"journal":{"name":"Signal Processing-Image Communication","volume":"140 ","pages":"Article 117412"},"PeriodicalIF":2.7,"publicationDate":"2025-10-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145321928","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
A new framework for realizing fraction-order filters with robust performance
IF 2.7 Zone 3 (Engineering & Technology) Q2 ENGINEERING, ELECTRICAL & ELECTRONIC Pub Date : 2025-10-06 DOI: 10.1016/j.image.2025.117411
Yiguang Liu
In signal and image processing, it is important to design filters that reveal high-frequency information (such as image edges) while simultaneously suppressing noise. To construct filters with this property, a new framework is given for realizing filters of fractional order. Within the framework, a filter with m+1 entries actually performs an (mα)-order difference, where α is a complex number. When the absolute value of the real part of α exceeds a threshold, some entries of the filter become almost 2-periodic while the others are close to zero. This sheds light on why a filter of finite length cannot perform differencing of excessively high order. By heuristic arguments, an applicable restriction on the real part of α is given: [−1,1]. Experimental results on synthetic data sequences indicate that the newly defined filters can disclose local extreme points robustly where the Laplacian of Gaussian (LoG) operator completely fails, and results on image benchmarks demonstrate that the new filters can extract more useful detail, such as edges, while suppressing noise, than conventional integer-order difference filters such as the LoG and Laplacian (Lap) operators. It is hoped that more effective and practical filters can be constructed within the new framework.
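The classical way to realize a fractional-order difference with a finite filter is the Grünwald-Letnikov construction, whose m+1 coefficients are signed generalized binomial coefficients. The sketch below implements that standard construction (for real alpha) as background for the abstract above; it is not the paper's new framework:

```python
import numpy as np

def gl_coeffs(alpha, m):
    """First m+1 Grünwald-Letnikov coefficients of an alpha-order
    difference: c_k = (-1)^k * C(alpha, k), computed via the stable
    recurrence c_k = c_{k-1} * (k - 1 - alpha) / k."""
    c = np.empty(m + 1)
    c[0] = 1.0
    for k in range(1, m + 1):
        c[k] = c[k - 1] * (k - 1 - alpha) / k
    return c

# alpha = 1 recovers the ordinary first-difference stencil [1, -1, 0, ...];
# non-integer alpha gives entries that decay but never terminate, which is
# why a finite filter can only approximate the fractional difference.
half = gl_coeffs(0.5, 4)   # 5-entry half-order difference filter
```

Convolving a signal with these coefficients applies the alpha-order difference; truncation to m+1 taps is what limits how high an order a finite filter can realize.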
{"title":"A new framework for realizing fraction-order filters with robust performance","authors":"Yiguang Liu","doi":"10.1016/j.image.2025.117411","DOIUrl":"10.1016/j.image.2025.117411","url":null,"abstract":"<div><div>In signal or image fields, how to design the filters not only revealing the high frequency information (such as image edges) but also subduing noise influence concurrently, is very important. To construct the filters having this performance, a new framework is given to realize filters with fractional orders. From the framework, a filter with <span><math><mrow><mi>m</mi><mo>+</mo><mn>1</mn></mrow></math></span> entries, actually performs <span><math><mrow><mo>(</mo><mi>m</mi><mi>α</mi><mo>)</mo></mrow></math></span>-order difference with <span><math><mi>α</mi></math></span> a complex number. When the absolute value of the real part of <span><math><mi>α</mi></math></span> is larger than a threshold, some entry of the filter is almost 2-periodic variable, and the others are close to zero. This sheds light on the fact, why a finite long filter cannot perform filtering with too high orders. By virtue of heuristic ways, an applicable restriction for the real part of <span><math><mi>α</mi></math></span> is given, <span><math><mrow><mo>[</mo><mo>−</mo><mn>1</mn><mo>,</mo><mn>1</mn><mo>]</mo></mrow></math></span>. Experimental results on synthetic data sequences indicate that the newly defined filters can disclose the local extreme points robustly while the Laplacian of Gaussian (LoG) operator completely fails, and the results on image benchmarks demonstrate that, the newly given filters can filter out more useful information details such as detecting edges while subduing noises, than the conventional integer-order difference filters such as LoG and Laplacian (Lap) operator. 
It is hopeful that more effective and practical filters can be constructed by the new framework.</div></div>","PeriodicalId":49521,"journal":{"name":"Signal Processing-Image Communication","volume":"140 ","pages":"Article 117411"},"PeriodicalIF":2.7,"publicationDate":"2025-10-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145321927","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
FAPS-MER: Facial action position and semantic based interactive fusion for micro-expression recognition
IF 2.7 Zone 3 (Engineering & Technology) Q2 ENGINEERING, ELECTRICAL & ELECTRONIC Pub Date : 2025-09-29 DOI: 10.1016/j.image.2025.117410
Gang Yan , Yubo He , Zian Liu , Yuqiang Guo , Shixin Cen
Micro-expression (ME) is a spontaneous facial motion that plays a crucial role in revealing true human emotions. However, there is a complex correlation between the local motion positions of MEs and the semantics of subtle actions, which makes it difficult to establish a robust dependency for micro-expression recognition (MER). In this work, we propose a facial action position and semantic interaction fusion micro-expression recognition network (FAPS-MER), which consists of a dual-branch backbone network, in which the upper branch is built around a Position Awareness Module (PAM) and the lower branch around a Semantic Extraction Module (SEM), together with a feature interaction strategy. Specifically, PAM uses position-awareness attention to extract the ME's motion position information while fusing semantic action information from the lower branch to capture global motion dependencies. SEM extracts facial action semantics while using action unit (AU) prediction as an auxiliary task, helping to establish a correlation between muscle motion and action semantics. A hierarchical correlation inference strategy is then proposed, which fuses position-awareness attention across different levels within the SEM, ensuring a continuous focus on the semantic information of actions at crucial facial positions. Finally, an interaction strategy leveraging complementary features is introduced to hierarchically fuse the dual-branch features, further advancing the learning of contextual relationships between facial motion positions and action semantics. Extensive experiments and evaluations on the benchmark datasets CASME II, SAMM and MMEW demonstrate that the proposed method is competitive with state-of-the-art methods.
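As background for the kind of dual-branch interaction described above, here is a minimal numpy sketch of scaled dot-product cross-attention, in which tokens from one branch attend to tokens from the other. It is illustrative only (identity projections, our own variable names), not FAPS-MER's actual fusion module:

```python
import numpy as np

def cross_attention(queries, keys_values):
    """Scaled dot-product cross-attention with identity projections:
    each query token becomes a softmax-weighted mixture of the other
    branch's tokens."""
    d = queries.shape[-1]
    scores = queries @ keys_values.T / np.sqrt(d)          # (Nq, Nk)
    scores -= scores.max(axis=-1, keepdims=True)           # numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True)         # rows sum to 1
    return weights @ keys_values                           # (Nq, d)

rng = np.random.default_rng(0)
pos = rng.random((49, 64))      # position-branch tokens (e.g. a 7x7 patch grid)
sem = rng.random((49, 64))      # semantic-branch tokens
fused = pos + cross_attention(pos, sem)   # residual fusion of the two branches
```

The residual add keeps each branch's own features intact while mixing in context from the other branch, the usual pattern when two streams are fused hierarchically.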
{"title":"FAPS-MER: Facial action position and semantic based interactive fusion for micro-expression recognition","authors":"Gang Yan ,&nbsp;Yubo He ,&nbsp;Zian Liu ,&nbsp;Yuqiang Guo ,&nbsp;Shixin Cen","doi":"10.1016/j.image.2025.117410","DOIUrl":"10.1016/j.image.2025.117410","url":null,"abstract":"<div><div>Micro-expression (ME) is a spontaneous facial motion that plays a crucial role in revealing the true human emotions. However, there is a complex correlation between the local motion position of ME and the semantics of subtle actions, which poses a great difficulty in establishing a robust dependency to achieve micro-expression recognition (MER). In this work, we propose a facial action position and semantic interaction fusion micro-expression recognition network (FAPS-MER), which consists of a dual-branch backbone network, i.e., the upper branch consists of a Position Awareness Module (PAM), while the lower branch consists of a Semantic Extraction Module (SEM), and a feature interaction strategy. Specifically, PAM uses position awareness attention to extract the ME’s motion position information while fusing semantic action information from the lower branch to capture global motion dependencies. SEM extracts facial action semantics while using action unit (AU) prediction as an auxiliary task, helping to establish a correlation between muscle motion and action semantics. Then, a hierarchical correlation inference strategy is proposed, which fuses position awareness attention across different levels within the SEM, thereby ensuring a continuous focus on the semantic information of actions at crucial facial positions. Finally, an interaction strategy leveraging complementary features is introduced to hierarchically fuse dual-branch features, further advancing the learning of contextual relationships between facial motion positions and action semantics. 
Through extensive experiments and evaluations on the benchmark datasets CASMEII, SAMM and MMEW, the results demonstrate that we are competitive with state-of-the-art methods.</div></div>","PeriodicalId":49521,"journal":{"name":"Signal Processing-Image Communication","volume":"140 ","pages":"Article 117410"},"PeriodicalIF":2.7,"publicationDate":"2025-09-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145270193","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0