
Latest Articles in Signal Processing-Image Communication

Distance self-adaptive fuzzy c-means and its application to image segmentation
IF 2.7 · CAS Tier 3 (Engineering & Technology) · Q2 ENGINEERING, ELECTRICAL & ELECTRONIC · Pub Date: 2025-10-13 · DOI: 10.1016/j.image.2025.117424
Shuaizheng Chen , Chaolu Feng , Dongxiu Li , Zijian Bian , Wei Li , Dazhe Zhao
Size sensitivity is one of the primary disadvantages of fuzzy c-means (FCM). Existing improvements assume that clusters with more samples are larger than clusters with fewer samples, measuring cluster size by sample count. An obvious counter-example is that samples sharing the same spatial coordinates do not increase the territory of the cluster they belong to. Given a spatial transformation, we first define the territory size of each cluster in the transformed space. We then propose a distance-normalized FCM (DFCM) in which the distance of each sample to each cluster centre is self-adaptively adjusted according to the territory size of that cluster, preventing cluster centres from drifting away from smaller clusters toward adjacent larger ones. In addition, standard FCM is added to the objective function of DFCM as a balance term that gradually weakens during iteration, guiding DFCM away from local optima. We specialize the spatial transformation as kernel functions to address the non-linear separability problem in Hilbert space. As kernel selection remains an open problem, we propose homogeneous and inhomogeneous sample partitions to construct an undirected graph and specialize DFCM in the graph space. Finally, we evaluate DFCM and its specializations against 12 FCM-based methods on 6 datasets using 8 metrics; performance generally improves by 2%–5%. Initialization sensitivity, parameter effects and settings, robustness to noise and bias fields, limitations, and future work are also discussed. The results show that the proposed method is more robust than its competitors.
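As a rough illustration of the distance-normalization idea, the NumPy sketch below adjusts each sample-to-centre squared distance by a per-cluster territory proxy before the standard FCM membership update. The territory proxy used here (the count of distinct sample coordinates currently nearest to each cluster) is an assumption for illustration only; the paper defines territory via a spatial transformation, and its weakening FCM balance term is not reproduced.

```python
import numpy as np

def dfcm_step(X, centers, m=2.0, eps=1e-12):
    """One membership/centre update with territory-normalized distances.

    Sketch of the distance-normalization idea only: each squared
    distance is divided by a per-cluster "territory" proxy so that
    centres are less likely to drift from small clusters toward large
    adjacent ones.
    """
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1) + eps  # (n, c)

    # Territory proxy: distinct sample positions nearest to each cluster.
    nearest = d2.argmin(axis=1)
    territory = np.array([
        max(len(np.unique(X[nearest == k], axis=0)), 1)
        for k in range(centers.shape[0])
    ], dtype=float)

    d2_norm = d2 / territory[None, :]            # self-adaptive adjustment

    # Standard FCM membership update on the normalized distances.
    ratio = (d2_norm[:, :, None] / d2_norm[:, None, :]) ** (1.0 / (m - 1))
    U = 1.0 / ratio.sum(axis=2)                  # (n, c), rows sum to 1

    # Weighted centre update.
    W = U ** m
    centers = (W.T @ X) / W.sum(axis=0)[:, None]
    return U, centers

# One iteration on toy data:
X = np.random.default_rng(0).normal(size=(200, 2))
centers = X[:3].copy()
U, centers = dfcm_step(X, centers)
```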
Citations: 0
Research on an adaptive robust ellipse fitting method integrating multiple weight strategies
IF 2.7 · CAS Tier 3 (Engineering & Technology) · Q2 ENGINEERING, ELECTRICAL & ELECTRONIC · Pub Date: 2025-10-11 · DOI: 10.1016/j.image.2025.117425
Bo-Lin Jian , Chao-Chung Peng , Wen-Lin Chu
This study aims to develop a robust ellipse fitting method that maintains computational efficiency comparable to Direct Least Squares (DLS) while achieving superior noise resistance for image processing and shape recognition applications. We propose Adaptive Dual-Robust Ellipse Fitting (ADREF), which integrates three novel contributions: (1) adaptive candidate clustering that uses k-nearest-neighbor distances to adjust epsilon values, automatically eliminating dependence on fixed parameters; (2) a dual-robust weighting strategy combining the Huber and Tukey biweight functions with mixing coefficient α = 0.5, enhanced by image grayscale intensity information; and (3) a nonlinear minimization correction based on geometric distance for improved parameter accuracy. Experimental validation across noise levels (σ = 1 to 11) demonstrates ADREF's superior performance over traditional methods. Compared to DLS, Weighted Least Squares (WLS), and Iteratively Reweighted Least Squares (IRLS), ADREF achieves the lowest errors in center coordinates, axis lengths, and rotation angles. Under high-noise conditions (σ = 11), ADREF maintains stable performance while the other methods deteriorate significantly. Multi-ellipse experiments confirm automatic identification and fitting capabilities without pre-specifying the number of targets. ADREF combines computational efficiency with robust noise resistance, making it particularly suitable for high-noise industrial vision, medical image analysis, and pattern recognition systems requiring accuracy and reliability.
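The dual-robust weight itself is easy to state. The sketch below mixes the classical Huber and Tukey biweight functions with the abstract's α = 0.5; the tuning constants (1.345 and 4.685, the usual 95%-efficiency defaults) and the omission of the grayscale-intensity enhancement are assumptions of this sketch.

```python
import numpy as np

def huber_weight(r, k=1.345):
    """Classical Huber weight: unit weight in the core, linear decay in the tails."""
    a = np.abs(r)
    return np.where(a <= k, 1.0, k / np.maximum(a, 1e-12))

def tukey_biweight(r, c=4.685):
    """Tukey biweight: smooth down-weighting, hard rejection beyond c."""
    u = r / c
    return np.where(np.abs(u) <= 1.0, (1.0 - u ** 2) ** 2, 0.0)

def dual_robust_weight(residuals, alpha=0.5):
    """Mix of Huber and Tukey weights with the paper's alpha = 0.5."""
    return alpha * huber_weight(residuals) + (1 - alpha) * tukey_biweight(residuals)

# Usage inside an IRLS-style ellipse fit: residuals are the (scaled)
# algebraic or geometric distances of edge points to the current ellipse.
r = np.array([0.1, 0.8, 2.0, 6.0])
print(dual_robust_weight(r))
```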
Citations: 0
YUW-Net: A Y-shaped deep network for enhancement of Under-water images
IF 2.7 · CAS Tier 3 (Engineering & Technology) · Q2 ENGINEERING, ELECTRICAL & ELECTRONIC · Pub Date: 2025-10-11 · DOI: 10.1016/j.image.2025.117422
Rupankar Das, Chandrajit Choudhury
One of the main challenges in underwater imaging is the poor quality caused by the underwater propagation of light. In this work, this distortion is analyzed separately along the color and illumination channels, and accordingly a novel convolutional neural network (CNN) architecture is proposed for the restoration of underwater images. The proposed method is independent of any approximated physical model or parameter-based model of underwater image formation. In the proposed approach, the distortion is perceived as changes in the color and illumination distribution of the image, mainly in terms of the first and second moments. Extensive experiments are conducted to evaluate the proposed network on real underwater images alone and in combination with synthetically generated images. To substantiate the claims, a thorough qualitative and quantitative comparison with state-of-the-art works is presented, and the proposed network is found to outperform them. For easy reproducibility, the implementation will be shared at https://github.com/ChandrajitChoudhury/YUWNET upon acceptance.
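The "first and second moments" view of the distortion can be made concrete with a closed-form moment-matching operation: shift each channel's mean and standard deviation toward target statistics. This is only an illustration of the modeling assumption; YUW-Net learns the correction with a CNN rather than applying a fixed formula, and the target statistics below are arbitrary.

```python
import numpy as np

def match_channel_moments(img, target_mean, target_std, eps=1e-6):
    """Per-channel first/second-moment correction.

    Illustrates the view of underwater distortion as a shift in the
    mean (first moment) and spread (second moment) of each channel;
    the network itself learns this correction rather than applying it
    in closed form.
    """
    out = np.empty_like(img, dtype=np.float64)
    for c in range(img.shape[2]):
        ch = img[..., c].astype(np.float64)
        out[..., c] = (ch - ch.mean()) / (ch.std() + eps) * target_std[c] + target_mean[c]
    return np.clip(out, 0.0, 1.0)

# Example: pull a color-cast underwater frame toward neutral statistics.
rng = np.random.default_rng(0)
frame = rng.uniform(0.1, 0.5, size=(64, 64, 3))   # stand-in image in [0, 1]
restored = match_channel_moments(frame, target_mean=(0.5, 0.5, 0.5),
                                 target_std=(0.2, 0.2, 0.2))
```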
Citations: 0
MTDNet: A crowd counting network based on a multiscale transformer and dilated convolution
IF 2.7 · CAS Tier 3 (Engineering & Technology) · Q2 ENGINEERING, ELECTRICAL & ELECTRONIC · Pub Date: 2025-10-10 · DOI: 10.1016/j.image.2025.117423
Chongle Peng , Qingbing Sang , Xiaojun Wu , Zhaohong Deng , Lixiong Liu
Crowd counting methods based on Convolutional Neural Networks (CNNs) often struggle to model contextual features effectively due to their limited feature extraction scope. In contrast, the Transformer model, through its global self-attention mechanism, can consider information from all positions simultaneously, thereby modeling contextual relationships more comprehensively and enhancing the model's ability to understand sequences. Building on this, we propose a crowd counting network that leverages multi-scale features extracted by Transformer and dilated convolutions. Specifically, we utilize Vision Transformer to capture global information and integrate low-level detail features with high-level semantic features, enabling the model to effectively extract semantic features with global contextual information. Additionally, we design a multi-branch dilated convolution regression module, composed of dilated convolutions, to process feature maps generated during the encoding phase, which can effectively produce predicted density maps and yield accurate regression results. Furthermore, we introduce a novel loss function for our pipeline, which enhances the model's fitting capability and results in smoother density maps.
Extensive experiments on four public crowd counting datasets demonstrate that our method achieves state-of-the-art results on several benchmarks. On the ShanghaiTech dataset in particular, MTDNet achieves the best results, with the MSE improving by 2.6% over the next best model.
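A minimal PyTorch sketch of a multi-branch dilated-convolution regression head is given below: parallel dilated 3×3 convolutions widen the receptive field without downsampling before a 1×1 projection to the density map. The branch count, dilation rates, and channel widths are illustrative assumptions, not MTDNet's published configuration.

```python
import torch
import torch.nn as nn

class DilatedRegressionHead(nn.Module):
    """Sketch of a multi-branch dilated-convolution regression module.

    Each branch applies a 3x3 convolution with a different dilation
    rate (padding = dilation keeps spatial size); the concatenated
    responses are projected to a single-channel density map.
    """
    def __init__(self, in_ch=256, mid_ch=64, dilations=(1, 2, 3)):
        super().__init__()
        self.branches = nn.ModuleList([
            nn.Sequential(
                nn.Conv2d(in_ch, mid_ch, 3, padding=d, dilation=d),
                nn.ReLU(inplace=True),
            )
            for d in dilations
        ])
        self.head = nn.Conv2d(mid_ch * len(dilations), 1, 1)  # density map

    def forward(self, x):
        x = torch.cat([b(x) for b in self.branches], dim=1)
        return torch.relu(self.head(x))   # densities are non-negative

feats = torch.randn(1, 256, 32, 32)        # encoder feature map
density = DilatedRegressionHead()(feats)   # crowd count ~ density.sum()
```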
Citations: 0
An LDCT image denoising model based on dual-path attention
IF 2.7 · CAS Tier 3 (Engineering & Technology) · Q2 ENGINEERING, ELECTRICAL & ELECTRONIC · Pub Date: 2025-10-09 · DOI: 10.1016/j.image.2025.117412
Xiaodong Guo , Xiaoyan Chang , Lifang Wang , Rongguo Zhang , Lihua Hu
Existing denoising methods for low-dose computed tomography (LDCT) images generally suffer from an imbalance between global correlation capture and local detail preservation, leading to blurring of the global image structure or loss of edge textures, which seriously impairs the accuracy of clinical diagnosis. To address this limitation, a dual-path attention denoising model is proposed. An LSTM-based attention module (LBAM) is introduced: it captures long-range sequence dependencies through a gating mechanism, focuses on key diagnostic information using residual connections, and enhances the modeling of cross-regional continuous morphology in the sequence dimension. Meanwhile, a multi-scale attention module (MSAM) is embedded in the U-Net architecture, which performs parallel modeling via channel-spatial attention and self-attention to achieve collaborative optimization of local detail enhancement and global correlation capture. The results show that the method can effectively balance the integrity of the global structure and the clarity of local details, significantly improving the quality of LDCT images. The PSNR and SSIM values in the Mayo dataset were 31.6725 and 0.8820, respectively, indicating the effectiveness of the method in removing noise and preserving image details.
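For reference, the reported figures can be reproduced against a normal-dose ground truth with the standard metrics, e.g. via scikit-image; the sketch below assumes images normalized to [0, 1].

```python
import numpy as np
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

def evaluate_denoising(denoised, reference, data_range=1.0):
    """PSNR/SSIM against a normal-dose reference, as reported on Mayo.

    PSNR = 10 * log10(data_range^2 / MSE); SSIM compares local
    luminance, contrast, and structure. Both assume the two images
    share the same dynamic range.
    """
    psnr = peak_signal_noise_ratio(reference, denoised, data_range=data_range)
    ssim = structural_similarity(reference, denoised, data_range=data_range)
    return psnr, ssim

rng = np.random.default_rng(0)
ref = rng.random((128, 128))
noisy = np.clip(ref + 0.05 * rng.standard_normal(ref.shape), 0, 1)
print(evaluate_denoising(noisy, ref))
```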
Citations: 0
A new framework for realizing fraction-order filters with robust performance
IF 2.7 · CAS Tier 3 (Engineering & Technology) · Q2 ENGINEERING, ELECTRICAL & ELECTRONIC · Pub Date: 2025-10-06 · DOI: 10.1016/j.image.2025.117411
Yiguang Liu
In signal and image processing, it is very important to design filters that reveal high-frequency information (such as image edges) while simultaneously suppressing noise. To construct filters with this property, a new framework is given for realizing filters with fractional orders. In this framework, a filter with m+1 entries actually performs an (mα)-order difference, with α a complex number. When the absolute value of the real part of α is larger than a threshold, some entries of the filter become almost 2-periodic while the others are close to zero; this sheds light on why a finite-length filter cannot perform filtering of too high an order. By heuristic arguments, an applicable restriction for the real part of α is given: [−1, 1]. Experimental results on synthetic data sequences indicate that the newly defined filters can disclose local extreme points robustly where the Laplacian of Gaussian (LoG) operator completely fails, and results on image benchmarks demonstrate that the new filters recover more useful detail, such as detecting edges while suppressing noise, than conventional integer-order difference filters such as the LoG and Laplacian (Lap) operators. It is hoped that more effective and practical filters can be constructed within the new framework.
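For context, the classical finite-length fractional-difference filter comes from truncating the Grünwald-Letnikov binomial expansion, which the sketch below implements; the paper's framework generalizes such constructions (including complex α) and is not reproduced here.

```python
import numpy as np

def gl_fractional_diff_filter(alpha, length):
    """Truncated Grünwald-Letnikov fractional-difference coefficients.

    c_0 = 1 and c_k = c_{k-1} * (k - 1 - alpha) / k, which equals
    (-1)^k * binom(alpha, k). For alpha = 1 this reduces to the first
    difference [1, -1]; non-integer (or complex) alpha gives the
    classical fractional-order difference.
    """
    c = np.empty(length, dtype=complex)
    c[0] = 1.0
    for k in range(1, length):
        c[k] = c[k - 1] * (k - 1 - alpha) / k
    return c

# A half-order (alpha = 0.5) difference filter with 5 taps:
h = gl_fractional_diff_filter(0.5, 5)
signal = np.sin(np.linspace(0, 2 * np.pi, 100))
filtered = np.convolve(signal, h.real, mode="same")  # real alpha -> real taps
```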
Citations: 0
FAPS-MER: Facial action position and semantic based interactive fusion for micro-expression recognition
IF 2.7 · CAS Tier 3 (Engineering & Technology) · Q2 ENGINEERING, ELECTRICAL & ELECTRONIC · Pub Date: 2025-09-29 · DOI: 10.1016/j.image.2025.117410
Gang Yan , Yubo He , Zian Liu , Yuqiang Guo , Shixin Cen
Micro-expression (ME) is a spontaneous facial motion that plays a crucial role in revealing true human emotions. However, the complex correlation between the local motion positions of MEs and the semantics of subtle actions makes it difficult to establish the robust dependencies needed for micro-expression recognition (MER). In this work, we propose a facial action position and semantic interaction fusion micro-expression recognition network (FAPS-MER) built on a dual-branch backbone: the upper branch is a Position Awareness Module (PAM), the lower branch is a Semantic Extraction Module (SEM), and the two are coupled by a feature interaction strategy. Specifically, PAM uses position-awareness attention to extract the ME's motion position information while fusing semantic action information from the lower branch to capture global motion dependencies. SEM extracts facial action semantics while using action unit (AU) prediction as an auxiliary task, helping to establish a correlation between muscle motion and action semantics. A hierarchical correlation inference strategy then fuses position-awareness attention across different levels within the SEM, ensuring a continuous focus on the semantic information of actions at crucial facial positions. Finally, an interaction strategy leveraging complementary features hierarchically fuses the dual-branch features, further advancing the learning of contextual relationships between facial motion positions and action semantics. Extensive experiments and evaluations on the benchmark datasets CASME II, SAMM, and MMEW demonstrate that our method is competitive with the state of the art.
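The AU auxiliary task suggests a standard multi-task objective. The sketch below combines cross-entropy over expression classes with binary cross-entropy over AU activations; the weighting coefficient and the BCE form are illustrative assumptions, as the abstract does not specify the loss.

```python
import torch
import torch.nn.functional as F

def faps_style_loss(me_logits, me_labels, au_logits, au_labels, lam=0.5):
    """Joint objective: expression classification + AU prediction.

    Cross-entropy for the micro-expression class plus a binary
    cross-entropy auxiliary term over action-unit activations; the
    weight lam is a hypothetical choice for this sketch.
    """
    cls_loss = F.cross_entropy(me_logits, me_labels)
    au_loss = F.binary_cross_entropy_with_logits(au_logits, au_labels)
    return cls_loss + lam * au_loss

# Toy batch: 4 samples, 5 expression classes, 12 AUs.
me_logits = torch.randn(4, 5)
me_labels = torch.randint(0, 5, (4,))
au_logits = torch.randn(4, 12)
au_labels = torch.randint(0, 2, (4, 12)).float()
print(faps_style_loss(me_logits, me_labels, au_logits, au_labels))
```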
Citations: 0
Multi-exposure image enhancement and YOLO integration for nighttime pedestrian detection
IF 2.7 · CAS Tier 3 (Engineering & Technology) · Q2 ENGINEERING, ELECTRICAL & ELECTRONIC · Pub Date: 2025-09-26 · DOI: 10.1016/j.image.2025.117421
Xiaobiao Dai , Junbo Lan , Zhigang Chen , Botao Wang , Xue Wen
This paper presents DCExYOLO, a novel method integrating multi-exposure image enhancement with YOLO object detection for real-time pedestrian detection in nighttime driving scenarios. To address the challenges of uneven illumination and low-light conditions in such scenarios, we introduce an improved Zero-DCE++ algorithm to generate enhanced images at multiple exposure levels, which are then combined with the original image as input to the YOLO detector. The method significantly enhances the synergy between image enhancement and object detection through the design of multi-task loss functions and a two-stage optimization strategy. Extensive experiments on multiple datasets demonstrate that DCExYOLO achieves an optimal balance between detection performance and efficiency, significantly reducing the log-average miss rate (MR⁻²) compared to the YOLO baseline. This research validates the potential of multi-exposure enhancement for object detection in complex illumination environments, providing an efficient and reliable solution for intelligent driving and traffic safety, while establishing a foundation for future optimization of detection technologies in complex scenarios.
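The multi-exposure generation step can be sketched with the Zero-DCE quadratic curve LE(I) = I + αI(1 − I), applied with several scalar α values to mimic different exposure levels; Zero-DCE++ actually predicts per-pixel curve parameters, so treat this as a simplified stand-in.

```python
import numpy as np

def dce_curve(img, alpha, iters=4):
    """Zero-DCE style quadratic light-enhancement curve.

    Applies LE(I) = I + alpha * I * (1 - I) for a few iterations;
    here one scalar alpha per exposure level stands in for the
    per-pixel parameter maps that Zero-DCE++ predicts.
    """
    out = img.astype(np.float64)
    for _ in range(iters):
        out = out + alpha * out * (1.0 - out)
    return np.clip(out, 0.0, 1.0)

def multi_exposure_stack(img, alphas=(-0.3, 0.3, 0.6)):
    """Original frame plus enhanced variants, concatenated channel-wise
    as a detector input (the fusion the abstract describes)."""
    variants = [img] + [dce_curve(img, a) for a in alphas]
    return np.concatenate(variants, axis=2)

frame = np.random.default_rng(0).uniform(0.0, 0.3, size=(64, 64, 3))  # dark frame
stack = multi_exposure_stack(frame)   # (64, 64, 12) -> input to the detector
```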
Citations: 0
Global and local collaborative learning for no-reference omnidirectional image quality assessment
IF 2.7 · CAS Tier 3 (Engineering & Technology) · Q2 ENGINEERING, ELECTRICAL & ELECTRONIC · Pub Date: 2025-09-26 · DOI: 10.1016/j.image.2025.117409
Deyang Liu , Lifei Wan , Xiaolin Zhang , Xiaofei Zhou , Caifeng Shan
Omnidirectional images (OIs) have achieved tremendous success in virtual reality applications. With the continuous increase in network bandwidth, users can access massive numbers of OIs from the internet, so evaluating the visual quality of distorted OIs is crucial to ensuring a high-quality immersive experience. Most existing viewport-based OI quality assessment (OIQA) methods overlook the inconsistent distortions within each viewport, and the loss of texture detail introduced by the viewport downsampling procedure further limits assessment performance. To address these challenges, this paper proposes a global-and-local collaborative learning method for no-reference OIQA. We adopt a dual-level learning architecture to collaboratively explore the non-uniform distortions and learn a sparse representation of each projected viewport. Specifically, we extract hierarchical features from each viewport to align with the hierarchical perceptual process of the human visual system (HVS). By aggregating them with a Transformer encoder, the inconsistent spatial features in each viewport can be mined globally. To preserve more texture detail during viewport downsampling, we introduce a learnable patch selection paradigm: by learning the position preferences of local texture variations in each viewport, our method derives a set of sparse image patches to sparsely represent the downsampled viewport. Comprehensive experiments illustrate the superiority of the proposed method on three publicly available databases. The code is available at https://github.com/ldyorchid/GLCNet-OIQA.
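A learnable sparse patch selection can be sketched as a scored top-k over non-overlapping patches, as below; the patch size, k, and the linear scoring head are illustrative assumptions, not the paper's design. Note that a hard top-k passes no gradient to the scorer, so end-to-end training would need a soft relaxation (e.g., Gumbel top-k).

```python
import torch
import torch.nn as nn

class SparsePatchSelector(nn.Module):
    """Sketch of learnable sparse patch selection for a viewport.

    Scores each non-overlapping patch with a small linear head and
    keeps the top-k, so texture-critical regions survive downsampling.
    """
    def __init__(self, patch=16, channels=3, k=32):
        super().__init__()
        self.unfold = nn.Unfold(kernel_size=patch, stride=patch)
        self.score = nn.Linear(channels * patch * patch, 1)
        self.k = k

    def forward(self, x):                          # x: (B, C, H, W)
        patches = self.unfold(x).transpose(1, 2)   # (B, N, C*p*p)
        scores = self.score(patches).squeeze(-1)   # (B, N)
        top = scores.topk(self.k, dim=1).indices   # indices of kept patches
        idx = top.unsqueeze(-1).expand(-1, -1, patches.size(-1))
        return patches.gather(1, idx), top         # (B, k, C*p*p)

viewport = torch.randn(1, 3, 224, 224)
kept, where = SparsePatchSelector()(viewport)
```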
Citations: 0
Video and text semantic center alignment for text-video cross-modal retrieval
IF 2.7 · CAS Tier 3 (Engineering & Technology) · Q2 ENGINEERING, ELECTRICAL & ELECTRONIC · Pub Date: 2025-09-25 · DOI: 10.1016/j.image.2025.117413
Ming Jin , Huaxiang Zhang , Lei Zhu , Jiande Sun , Li Liu
With the proliferation of video on the internet, users demand greater precision and efficiency from retrieval technology. Current cross-modal retrieval technology has three main problems. First, the same semantic objects are not effectively aligned between video and text. Second, existing neural networks destroy the spatial features of a video when establishing its temporal features. Finally, the extraction and processing of the text's local features are overly complex, which increases network complexity. To address these problems, we propose a text-video semantic center alignment network. First, a semantic center alignment module is constructed to promote the alignment of semantic features of the same object across different modalities. Second, a pre-trained BERT based on a residual structure is designed to protect spatial information when inferring temporal information. Finally, the "jieba" library is employed to extract the local key information of the text, thereby simplifying local feature extraction. The effectiveness of the network structure was evaluated on the MSVD, MSR-VTT, and DiDeMo datasets.
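For the local-key-term step, jieba's built-in TF-IDF keyword extractor can be used directly, as sketched below; the topK value is arbitrary here, and the sample caption is a made-up example.

```python
import jieba.analyse

def local_key_terms(caption, top_k=5):
    """Extract local key terms from a caption with jieba's TF-IDF
    keyword extractor; topK and the TF-IDF backend are the library
    defaults, chosen here only for illustration."""
    return jieba.analyse.extract_tags(caption, topK=top_k)

# Hypothetical caption: "a man is chopping an onion in the kitchen".
print(local_key_terms("一个男人在厨房里切洋葱"))
```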
Citations: 0