
IEEE Transactions on Image Processing (a publication of the IEEE Signal Processing Society): Latest Publications

Few-Shot Fine-Grained Classification With Foreground-Aware Kernelized Feature Reconstruction Network
IF 13.7 Pub Date : 2025-12-30 DOI: 10.1109/TIP.2025.3646940
Yangfan Li;Wei Li
Feature reconstruction networks have achieved remarkable performance in few-shot fine-grained classification tasks. Nonetheless, traditional feature reconstruction networks rely on linear regression. This linearity may cause the loss of subtle discriminative cues, ultimately resulting in less precise reconstructed features. Moreover, in situations where the background predominantly occupies the image, the background reconstruction errors tend to overshadow foreground reconstruction errors, resulting in inaccurate reconstruction errors. In order to address the two key issues, a novel approach called the Foreground-Aware Kernelized Feature Reconstruction Network (FKFRN) is proposed. Specifically, to address the problem of imprecise reconstructed features, we introduce kernel methods into linear feature reconstruction, extending it to nonlinear feature reconstruction, thus enabling the reconstruction of richer, finer-grained discriminative features. To tackle the issue of inaccurate reconstruction errors, the foreground-aware reconstruction error is proposed. Specifically, the model assigns higher weights to features containing more foreground information and lower weights to those dominated by background content, which reduces the impact of background errors on the overall reconstruction. To estimate these weights accurately, we design two complementary strategies: an explicit probabilistic graphical model and an implicit neural network–based approach. Extensive experimental results on eight datasets validate the effectiveness of the proposed approach for few-shot fine-grained classification.
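To make the kernelized reconstruction idea concrete, the sketch below reconstructs query features from support features with an RBF-kernel ridge regression and scores the class by a foreground-weighted reconstruction error. This is a minimal NumPy illustration under assumed shapes, kernel, and hyperparameters (rbf_kernel, kernel_reconstruct, foreground_aware_error, gamma, and lam are hypothetical names and values), not the authors' FKFRN implementation; in particular, the foreground weights are simply given here rather than estimated by the paper's probabilistic graphical model or neural strategy.

```python
import numpy as np

def rbf_kernel(A, B, gamma=0.5):
    # Gaussian (RBF) kernel matrix between the rows of A (n, d) and B (m, d).
    sq = np.sum(A**2, 1)[:, None] + np.sum(B**2, 1)[None, :] - 2 * A @ B.T
    return np.exp(-gamma * sq)

def kernel_reconstruct(support, query, lam=0.1, gamma=0.5):
    # Kernel ridge regression reconstruction of query rows from support rows:
    # Q_hat = K_qs (K_ss + lam I)^{-1} S, a nonlinear analogue of the usual
    # linear feature-reconstruction step.
    K_ss = rbf_kernel(support, support, gamma)                 # (n, n)
    K_qs = rbf_kernel(query, support, gamma)                   # (m, n)
    alpha = np.linalg.solve(K_ss + lam * np.eye(len(support)), support)
    return K_qs @ alpha                                        # (m, d)

def foreground_aware_error(query, recon, fg_weights):
    # Per-location squared errors, re-weighted so background-dominated
    # locations (small fg_weights) contribute less to the class score.
    err = np.sum((query - recon) ** 2, axis=1)
    w = fg_weights / (fg_weights.sum() + 1e-8)
    return float(np.sum(w * err))

rng = np.random.default_rng(0)
support = rng.normal(size=(49, 64))   # e.g. a 7x7 support feature map, 64-dim
query = rng.normal(size=(49, 64))     # query feature map of one image
fg = rng.uniform(size=49)             # assumed (given) foreground probabilities
recon = kernel_reconstruct(support, query)
print(foreground_aware_error(query, recon, fg))
```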
{"title":"Few-Shot Fine-Grained Classification With Foreground-Aware Kernelized Feature Reconstruction Network","authors":"Yangfan Li;Wei Li","doi":"10.1109/TIP.2025.3646940","DOIUrl":"10.1109/TIP.2025.3646940","url":null,"abstract":"Feature reconstruction networks have achieved remarkable performance in few-shot fine-grained classification tasks. Nonetheless, traditional feature reconstruction networks rely on linear regression. This linearity may cause the loss of subtle discriminative cues, ultimately resulting in less precise reconstructed features. Moreover, in situations where the background predominantly occupies the image, the background reconstruction errors tend to overshadow foreground reconstruction errors, resulting in inaccurate reconstruction errors. In order to address the two key issues, a novel approach called the Foreground-Aware Kernelized Feature Reconstruction Network (FKFRN) is proposed. Specifically, to address the problem of imprecise reconstructed features, we introduce kernel methods into linear feature reconstruction, extending it to nonlinear feature reconstruction, thus enabling the reconstruction of richer, finer-grained discriminative features. To tackle the issue of inaccurate reconstruction errors, the foreground-aware reconstruction error is proposed. Specifically, the model assigns higher weights to features containing more foreground information and lower weights to those dominated by background content, which reduces the impact of background errors on the overall reconstruction. To estimate these weights accurately, we design two complementary strategies: an explicit probabilistic graphical model and an implicit neural network–based approach. Extensive experimental results on eight datasets validate the effectiveness of the proposed approach for few-shot fine-grained classification.","PeriodicalId":94032,"journal":{"name":"IEEE transactions on image processing : a publication of the IEEE Signal Processing Society","volume":"35 ","pages":"150-165"},"PeriodicalIF":13.7,"publicationDate":"2025-12-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145866581","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
LNet: Lightweight Network for Driver Attention Estimation via Scene and Gaze Consistency
IF 13.7 Pub Date : 2025-12-30 DOI: 10.1109/TIP.2025.3646893
Daosong Hu;Xi Li;Mingyue Cui;Kai Huang
In resource-constrained vehicle systems, establishing consistency between multi-view scenes and driver gaze remains challenging. Prior methods mainly focus on cross-source data fusion, estimating gaze or attention maps through unidirectional implicit links between scene and facial features. Although bidirectional projection can correct misalignment between predictions and ground truth, the high resolution of scene images and complex semantic extraction incur heavy computational loads. To address these issues, we propose a lightweight driver-attention estimation framework that leverages geometric consistency between scene and gaze to guide feature extraction bidirectionally, thereby strengthening representation. Specifically, we first introduce a lightweight feature extraction module that captures global and local information in parallel through dual asymmetric branches to efficiently extract facial and scene features. An information cross fusion module is then designed to promote interaction between the scene and gaze streams. The multi-branch architecture extracts gaze and geometric cues at multiple scales, reducing the computational redundancy caused by mixed features when modeling geometric consistency across both views. Experiments on a large public dataset show that incorporating scene information introduces no significant computational overhead and yields a better trade-off between accuracy and efficiency. Moreover, leveraging bidirectional projection and the temporal continuity of gaze, we preliminarily explore the framework’s potential for predicting attention trends.
{"title":"LNet: Lightweight Network for Driver Attention Estimation via Scene and Gaze Consistency","authors":"Daosong Hu;Xi Li;Mingyue Cui;Kai Huang","doi":"10.1109/TIP.2025.3646893","DOIUrl":"10.1109/TIP.2025.3646893","url":null,"abstract":"In resource-constrained vehicle systems, establishing consistency between multi-view scenes and driver gaze remains challenging. Prior methods mainly focus on cross-source data fusion, estimating gaze or attention maps through unidirectional implicit links between scene and facial features. Although bidirectional projection can correct misalignment between predictions and ground truth, the high resolution of scene images and complex semantic extraction incur heavy computational loads. To address these issues, we propose a lightweight driver-attention estimation framework that leverages geometric consistency between scene and gaze to guide feature extraction bidirectionally, thereby strengthening representation. Specifically, we first introduce a lightweight feature extraction module that captures global and local information in parallel through dual asymmetric branches to efficiently extract facial and scene features. An information cross fusion module is then designed to promote interaction between the scene and gaze streams. The multi-branch architecture extracts gaze and geometric cues at multiple scales, reducing the computational redundancy caused by mixed features when modeling geometric consistency across both views. Experiments on a large public dataset show that incorporating scene information introduces no significant computational overhead and yields a better trade-off between accuracy and efficiency. Moreover, leveraging bidirectional projection and the temporal continuity of gaze, we preliminarily explore the framework’s potential for predicting attention trends.","PeriodicalId":94032,"journal":{"name":"IEEE transactions on image processing : a publication of the IEEE Signal Processing Society","volume":"35 ","pages":"27-41"},"PeriodicalIF":13.7,"publicationDate":"2025-12-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145866772","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
Embracing the Power of Known Class Bias in Open Set Recognition From a Reconstruction Perspective
IF 13.7 Pub Date : 2025-12-26 DOI: 10.1109/TIP.2025.3644791
Heyang Sun;Chuanxing Geng;Songcan Chen
The open-set known class bias is conventionally viewed as a fatal problem: models trained solely on known classes tend to fit unknown classes to known classes with high confidence at inference. Existing methods therefore, without exception, respond in one of two ways: most strive to eliminate the known class bias as far as possible, while others circumvent it by employing a reconstruction method. In this paper, we challenge both widely accepted approaches and present a novel proposition: the known class bias that is harmful to most methods is, exactly conversely, beneficial to reconstruction-based methods, and can therefore serve as a positive incentive for open set recognition (OSR) models from a reconstruction perspective. Along this line, we propose the Bias Enhanced Reconstruction Learning (BERL) framework to enhance the known class bias at the class level, model level, and sample level. Specifically, at the class level, a class-specific representation is constructed in a supervised contrastive manner to avoid overgeneralization, while at the model level a diffusion model is employed, injecting the class prior to guide the biased reconstruction. Additionally, we leverage the advantages of the diffusion model to design a self-adaptive strategy, enabling effective sample-level biased sampling based on the information bottleneck theory. Experiments on various benchmarks demonstrate the effectiveness and performance superiority of the proposed method.
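As one piece of this pipeline, the class-level supervised contrastive objective can be illustrated with a generic SupCon-style loss: embeddings sharing a known-class label are pulled together and all others pushed apart, which is one standard way such a class-specific representation is learned. The snippet below is a hedged sketch of that generic loss (function name, temperature, and toy batch are illustrative), not BERL's exact formulation, and it says nothing about the diffusion-based model-level and sample-level components.

```python
import torch
import torch.nn.functional as F

def supervised_contrastive_loss(features, labels, temperature=0.1):
    # Generic SupCon-style loss: for each anchor, positives are all other
    # samples carrying the same (known-class) label.
    z = F.normalize(features, dim=1)
    sim = z @ z.T / temperature                       # (N, N) similarities
    n = z.size(0)
    self_mask = torch.eye(n, dtype=torch.bool)
    pos_mask = (labels[:, None] == labels[None, :]) & ~self_mask
    logits = sim.masked_fill(self_mask, float("-inf"))
    log_prob = logits - torch.logsumexp(logits, dim=1, keepdim=True)
    pos_counts = pos_mask.sum(1).clamp(min=1)
    # Average log-probability over positives for each anchor, then negate.
    loss = -(log_prob.masked_fill(~pos_mask, 0.0)).sum(1) / pos_counts
    return loss.mean()

feats = torch.randn(8, 32)                            # toy embeddings
labels = torch.tensor([0, 0, 1, 1, 2, 2, 3, 3])       # known-class labels
print(supervised_contrastive_loss(feats, labels))
```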
{"title":"Embracing the Power of Known Class Bias in Open Set Recognition From a Reconstruction Perspective","authors":"Heyang Sun;Chuanxing Geng;Songcan Chen","doi":"10.1109/TIP.2025.3644791","DOIUrl":"10.1109/TIP.2025.3644791","url":null,"abstract":"The open set known class bias is conventionally viewed as a fatal problem i.e., the models trained solely on known classes tend to fit unknown classes to known classes with high confidence in inference. Thus existing methods, without exception make a choice in two manners: most methods opt for eliminating the known class bias as much as possible with tireless efforts, while others circumvent the known class bias by employing a reconstruction method. However, in this paper, we challenge the two widely accepted approaches and present a novel proposition: the so-called harmful known class bias for most methods is, exactly conversely, beneficial for the reconstruction-based method and thus such known class bias can serve as a positive-incentive to the Open set recognition (OSR) models from a reconstruction perspective. Along this line, we propose the Bias Enhanced Reconstruction Learning (BERL) framework to enhance the known class bias respectively from the class level, model level and sample level. Specifically, at the class level, a specific representation is constructed in a supervised contrastive manner to avoid overgeneralization, while a diffusion model is employed by injecting the class prior to guide the biased reconstruction at the model level. Additionally, we leverage the advantages of the diffusion model to design a self-adaptive strategy, enabling effective sample-level biased sampling based on the information bottleneck theory. Experiments on various benchmarks demonstrate the effectiveness and performance superiority of the proposed method.","PeriodicalId":94032,"journal":{"name":"IEEE transactions on image processing : a publication of the IEEE Signal Processing Society","volume":"35 ","pages":"14-26"},"PeriodicalIF":13.7,"publicationDate":"2025-12-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145836281","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
AGAFNet: Adaptive Gated Attention Fusion Network for Accurate Nuclei Segmentation and Classification in Histology Images
IF 13.7 Pub Date : 2025-12-25 DOI: 10.1109/TIP.2025.3646471
Nyi Nyi Naing;Huazhen Chen;Qing Cai;Lili Xia;Zhongke Gao;Jianpeng An
Nuclei segmentation and classification in Hematoxylin and Eosin (H&E) stained histology images play a vital role in cancer diagnosis, treatment planning, and research. However, accurate segmentation can be hindered by factors like irregular cell shapes, unclear boundaries, and class imbalance. To address these challenges, we propose the Adaptive Gated Attention Fusion Network (AGAFNet), which integrates three innovative attention-based blocks into a U-shaped architecture complemented by dedicated decoders for both segmentation and classification tasks. These blocks comprise the Channel-wise and Spatial Attention Integration Block (CSAIB) for enhanced feature representation and selective focus on informative regions; the Adaptive Gated Convolutional Block (AGCB) for robust feature selection throughout the network; and the Fusion Attention Refinement Block (FARB) for effective information fusion. AGAFNet leverages these elements to provide a robust solution for precise nuclei segmentation and classification in H&E stained histology images. We evaluate the performance of AGAFNet on three large-scale multi-tissue datasets: PanNuke, CoNSeP, and Lizard. The experimental results demonstrate our proposed AGAFNet achieves comparable performance to state-of-the-art methods.
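For intuition about what an attention block of this kind computes, the sketch below composes a standard channel-attention step with a standard spatial-attention step over a feature map. It is a generic CBAM-style composition under assumed layer sizes (the class name, reduction ratio, and kernel size are illustrative), not the actual CSAIB, AGCB, or FARB designs from the paper.

```python
import torch
import torch.nn as nn

class ChannelSpatialAttention(nn.Module):
    # Generic channel-then-spatial attention over a (B, C, H, W) feature map.
    def __init__(self, channels, reduction=8):
        super().__init__()
        self.channel_mlp = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
        )
        self.spatial_conv = nn.Conv2d(2, 1, kernel_size=7, padding=3)

    def forward(self, x):
        b, c, _, _ = x.shape
        # Channel attention from global average- and max-pooled descriptors.
        avg = self.channel_mlp(x.mean(dim=(2, 3)))
        mx = self.channel_mlp(x.amax(dim=(2, 3)))
        x = x * torch.sigmoid(avg + mx).view(b, c, 1, 1)
        # Spatial attention from channel-wise mean and max maps.
        s = torch.cat([x.mean(1, keepdim=True), x.amax(1, keepdim=True)], dim=1)
        return x * torch.sigmoid(self.spatial_conv(s))

feat = torch.randn(2, 32, 64, 64)
print(ChannelSpatialAttention(32)(feat).shape)   # torch.Size([2, 32, 64, 64])
```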
{"title":"AGAFNet: Adaptive Gated Attention Fusion Network for Accurate Nuclei Segmentation and Classification in Histology Images","authors":"Nyi Nyi Naing;Huazhen Chen;Qing Cai;Lili Xia;Zhongke Gao;Jianpeng An","doi":"10.1109/TIP.2025.3646471","DOIUrl":"10.1109/TIP.2025.3646471","url":null,"abstract":"Nuclei segmentation and classification in Hematoxylin and Eosin (H&E) stained histology images play a vital role in cancer diagnosis, treatment planning, and research. However, accurate segmentation can be hindered by factors like irregular cell shapes, unclear boundaries, and class imbalance. To address these challenges, we propose the Adaptive Gated Attention Fusion Network (AGAFNet), which integrates three innovative attention-based blocks into a U-shaped architecture complemented by dedicated decoders for both segmentation and classification tasks. These blocks comprise the Channel-wise and Spatial Attention Integration Block (CSAIB) for enhanced feature representation and selective focus on informative regions; the Adaptive Gated Convolutional Block (AGCB) for robust feature selection throughout the network; and the Fusion Attention Refinement Block (FARB) for effective information fusion. AGAFNet leverages these elements to provide a robust solution for precise nuclei segmentation and classification in H&E stained histology images. We evaluate the performance of AGAFNet on three large-scale multi-tissue datasets: PanNuke, CoNSeP, and Lizard. The experimental results demonstrate our proposed AGAFNet achieves comparable performance to state-of-the-art methods.","PeriodicalId":94032,"journal":{"name":"IEEE transactions on image processing : a publication of the IEEE Signal Processing Society","volume":"35 ","pages":"98-111"},"PeriodicalIF":13.7,"publicationDate":"2025-12-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145830044","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
Multi-Stage Group Interaction and Cross-Domain Fusion Network for Real-Time Smoke Segmentation
IF 13.7 Pub Date : 2025-12-25 DOI: 10.1109/TIP.2025.3646455
Kang Li;Feiniu Yuan;Chunmei Wang;Chunli Meng
Lightweight smoke image segmentation is essential for fire warning systems, particularly on mobile devices. In recent years, although numerous high-precision, large-scale smoke segmentation models have been developed, there are few lightweight solutions specifically designed for mobile applications. Therefore, we propose a Multi-stage Group Interaction and Cross-domain Fusion Network (MGICFN) with low computational complexity for real-time smoke segmentation. To improve the model’s ability to effectively analyze smoke features, we incorporate a Cross-domain Interaction Attention Module (CIAM) to merge spatial and frequency domain features for creating a lightweight smoke encoder. To alleviate the loss of critical information from small smoke objects during downsampling, we design a Multi-stage Group Interaction Module (MGIM). The MGIM calibrates the information discrepancies between high and low-dimensional features. To enhance the boundary information of smoke targets, we introduce an Edge Enhancement Module (EEM), which utilizes predicted target boundaries as advanced guidance to refine lower-level smoke features. Furthermore, we implement a Group Convolutional Block Attention Module (GCBAM) and a Group Fusion Module (GFM) to connect the encoder and decoder efficiently. Experimental results demonstrate that MGICFN achieves an 88.70% Dice coefficient (Dice), an 81.16% mean Intersection over Union (mIoU), and a 91.93% accuracy (Acc) on the SFS3K dataset. It also achieves an 87.30% Dice, a 78.68% mIoU, and a 92.95% Acc on the SYN70K test dataset. Our MGICFN model has 0.73M parameters and requires 0.3G FLOPs.
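The reported numbers correspond to standard binary-segmentation metrics. Assuming the usual definitions (Dice over the smoke class, mIoU averaged over the smoke and background classes, and pixel accuracy), they can be computed as in the short sketch below; function and variable names are illustrative, and the paper may aggregate these quantities differently (for example per image versus over the whole test set).

```python
import numpy as np

def dice_miou_acc(pred, gt):
    # Binary-segmentation metrics: Dice coefficient, mean IoU over
    # {background, smoke}, and pixel accuracy. pred, gt: same-shape masks.
    pred, gt = pred.astype(bool), gt.astype(bool)
    inter = np.logical_and(pred, gt).sum()
    dice = 2 * inter / (pred.sum() + gt.sum() + 1e-8)
    iou_fg = inter / (np.logical_or(pred, gt).sum() + 1e-8)
    inter_bg = np.logical_and(~pred, ~gt).sum()
    iou_bg = inter_bg / (np.logical_or(~pred, ~gt).sum() + 1e-8)
    miou = (iou_fg + iou_bg) / 2
    acc = (pred == gt).mean()
    return dice, miou, acc

pred = np.zeros((64, 64), bool); pred[8:40, 8:40] = True   # toy prediction
gt   = np.zeros((64, 64), bool); gt[10:42, 10:42] = True   # toy ground truth
print(dice_miou_acc(pred, gt))
```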
{"title":"Multi-Stage Group Interaction and Cross-Domain Fusion Network for Real-Time Smoke Segmentation","authors":"Kang Li;Feiniu Yuan;Chunmei Wang;Chunli Meng","doi":"10.1109/TIP.2025.3646455","DOIUrl":"10.1109/TIP.2025.3646455","url":null,"abstract":"Lightweight smoke image segmentation is essential for fire warning systems, particularly on mobile devices. In recent years, although numerous high-precision, large-scale smoke segmentation models have been developed, there are few lightweight solutions specifically designed for mobile applications. Therefore, we propose a Multi-stage Group Interaction and Cross-domain Fusion Network (MGICFN) with low computational complexity for real-time smoke segmentation. To improve the model’s ability to effectively analyze smoke features, we incorporate a Cross-domain Interaction Attention Module (CIAM) to merge spatial and frequency domain features for creating a lightweight smoke encoder. To alleviate the loss of critical information from small smoke objects during downsampling, we design a Multi-stage Group Interaction Module (MGIM). The MGIM calibrates the information discrepancies between high and low-dimensional features. To enhance the boundary information of smoke targets, we introduce an Edge Enhancement Module (EEM), which utilizes predicted target boundaries as advanced guidance to refine lower-level smoke features. Furthermore, we implement a Group Convolutional Block Attention Module (GCBAM) and a Group Fusion Module (GFM) to connect the encoder and decoder efficiently. Experimental results demonstrate that MGICFN achieves an 88.70% Dice coefficient (Dice), an 81.16% mean Intersection over Union (mIoU), and a 91.93% accuracy (Acc) on the SFS3K dataset. It also achieves an 87.30% Dice, a 78.68% mIoU, and a 92.95% Acc on the SYN70K test dataset. Our MGICFN model has 0.73M parameters and requires 0.3G FLOPs.","PeriodicalId":94032,"journal":{"name":"IEEE transactions on image processing : a publication of the IEEE Signal Processing Society","volume":"35 ","pages":"124-135"},"PeriodicalIF":13.7,"publicationDate":"2025-12-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145829977","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
Enhanced Geometry and Semantics for Camera-Based 3D Semantic Scene Completion
IF 13.7 Pub Date : 2025-12-24 DOI: 10.1109/TIP.2025.3635475
Haihong Xiao;Wenxiong Kang;Yulan Guo;Hao Liu;Ying He
Giving machines the ability to infer the complete 3D geometry and semantics of complex scenes is crucial for many downstream tasks, such as decision-making and planning. Vision-centric Semantic Scene Completion (SSC) has emerged as a trendy 3D perception paradigm due to its compatibility with task properties, low cost, and rich visual cues. Despite impressive results, current approaches inevitably suffer from problems such as depth errors or depth ambiguities during the 2D-to-3D transformation process. To overcome these limitations, in this paper, we first introduce an Optical Flow-Guided (OFG) DepthNet that leverages the strengths of pretrained depth estimation models, while incorporating optical flow images to improve depth prediction accuracy in regions with significant depth changes. Then, we propose a depth ambiguity-mitigated feature lifting strategy that implements deformable cross-attention in 3D pixel space to avoid depth ambiguities caused by the projection process from 3D to 2D and further enhances the effectiveness of feature updating through the utilization of prior mask indices. Moreover, we customize two subnetworks: a residual voxel network and a sparse UNet, to enhance the network’s geometric prediction capabilities and ensure consistent semantic reasoning across varying scales. By doing so, our method achieves performance improvements over state-of-the-art methods on the SemanticKITTI, SSCBench-KITTI-360 and Occ3D-nuScene benchmarks.
{"title":"Enhanced Geometry and Semantics for Camera-Based 3D Semantic Scene Completion","authors":"Haihong Xiao;Wenxiong Kang;Yulan Guo;Hao Liu;Ying He","doi":"10.1109/TIP.2025.3635475","DOIUrl":"10.1109/TIP.2025.3635475","url":null,"abstract":"Giving machines the ability to infer the complete 3D geometry and semantics of complex scenes is crucial for many downstream tasks, such as decision-making and planning. Vision-centric Semantic Scene Completion (SSC) has emerged as a trendy 3D perception paradigm due to its compatibility with task properties, low cost, and rich visual cues. Despite impressive results, current approaches inevitably suffer from problems such as depth errors or depth ambiguities during the 2D-to-3D transformation process. To overcome these limitations, in this paper, we first introduce an Optical Flow-Guided (OFG) DepthNet that leverages the strengths of pretrained depth estimation models, while incorporating optical flow images to improve depth prediction accuracy in regions with significant depth changes. Then, we propose a depth ambiguity-mitigated feature lifting strategy that implements deformable cross-attention in 3D pixel space to avoid depth ambiguities caused by the projection process from 3D to 2D and further enhances the effectiveness of feature updating through the utilization of prior mask indices. Moreover, we customize two subnetworks: a residual voxel network and a sparse UNet, to enhance the network’s geometric prediction capabilities and ensure consistent semantic reasoning across varying scales. By doing so, our method achieves performance improvements over state-of-the-art methods on the SemanticKITTI, SSCBench-KITTI-360 and Occ3D-nuScene benchmarks.","PeriodicalId":94032,"journal":{"name":"IEEE transactions on image processing : a publication of the IEEE Signal Processing Society","volume":"35 ","pages":"1-13"},"PeriodicalIF":13.7,"publicationDate":"2025-12-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145823141","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
FRFSL: Feature Reconstruction-Based Cross-Domain Few-Shot Learning for Coastal Wetland Hyperspectral Image Classification
IF 13.7 Pub Date : 2025-12-24 DOI: 10.1109/TIP.2025.3646073
Qixing Yu;Zhongwei Li;Ziqi Xin;Fangming Guo;Guangbo Ren;Jianbu Wang;Zhenggang Bi
Hyperspectral image classification (HSIC) is a valuable method for identifying coastal wetland vegetation, but challenges such as environmental complexity and difficulty in distinguishing land cover types make large-scale labeling difficult. Cross-domain few-shot learning (CDFSL) offers a potential solution to limited labeling. Existing CDFSL HSIC methods have made significant progress, but they still face challenges such as prototype deviation and covariate shift, and they rely on complex domain alignment (DA) methods. To address these issues, a feature reconstruction-based CDFSL (FRFSL) algorithm is proposed. Within FRFSL, a Prototype Calibration Module (PCM) is designed to address prototype deviation: it employs a Bayesian inference-enhanced Gaussian Mixture Model to select reliable query features for prototype reconstruction, aligning the prototypes more closely with the actual distribution. Additionally, a ridge regression closed-form solution is incorporated into the Distance Metric Module (DMM), employing a projection matrix for prototype reconstruction to mitigate covariate shifts between the support and query sets. Features from both source and target domains are reconstructed into dynamic graphs, transforming DA into a graph matching problem guided by optimal transport theory. A novel shared transport matrix implementation algorithm is developed to achieve lightweight and interpretable alignment. Extensive experiments on three self-constructed coastal wetland datasets and one public dataset show that FRFSL outperforms eleven state-of-the-art algorithms. The code will be available at https://github.com/Yqx-ACE/TIP_2025_FRFSL
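The "ridge regression closed-form solution" used for prototype reconstruction has a standard generic form: reconstruct one feature set from another via W = T B^T (B B^T + lam I)^{-1}. The sketch below illustrates that closed form with assumed shapes and a hypothetical function name; the exact projection matrix, and how it enters the paper's Distance Metric Module, may differ.

```python
import numpy as np

def ridge_reconstruct(basis, target, lam=0.1):
    # Closed-form ridge regression reconstruction of `target` rows from
    # `basis` rows: W = T B^T (B B^T + lam I)^{-1}, T_hat = W B.
    G = basis @ basis.T                                   # (n, n) Gram matrix
    W = target @ basis.T @ np.linalg.inv(G + lam * np.eye(len(basis)))
    return W @ basis, W

rng = np.random.default_rng(1)
support = rng.normal(size=(25, 128))   # support (basis) features of one class
query = rng.normal(size=(36, 128))     # query features to be reconstructed
recon, W = ridge_reconstruct(support, query)
# The residual can then act as a class distance inside a metric module.
print(np.mean((query - recon) ** 2))
```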
{"title":"FRFSL: Feature Reconstruction-Based Cross-Domain Few-Shot Learning for Coastal Wetland Hyperspectral Image Classification","authors":"Qixing Yu;Zhongwei Li;Ziqi Xin;Fangming Guo;Guangbo Ren;Jianbu Wang;Zhenggang Bi","doi":"10.1109/TIP.2025.3646073","DOIUrl":"10.1109/TIP.2025.3646073","url":null,"abstract":"Hyperspectral image classification (HSIC) is a valuable method for identifying coastal wetland vegetation, but challenges like environmental complexity and difficulty in distinguishing land cover types make large-scale labeling difficult. Cross-domain few-shot learning (CDFSL) offers a potential solution to limited labeling. Existing CDFSL HSIC methods have made significant progress, but still face challenges like prototype deviation, covariate shifts, and rely on complex domain alignment (DA) methods. To address these issues, a feature reconstruction-based CDFSL (FRFSL) algorithm is proposed. Within FRFSL, a Prototype Calibration Module (PCM) is designed for the prototype deviation, which employs a Bayesian inference-enhanced Gaussian Mixture Model to select reliable query features for prototype reconstruction, aligning the prototypes more closely with the actual distribution. Additionally, a ridge regression closed-form solution is incorporated into the Distance Metric Module (DMM), employing a projection matrix for prototype reconstruction to mitigate covariate shifts between the support and query sets. Features from both source and target domains are reconstructed into dynamic graphs, transforming DA into a graph matching problem guided by optimal transport theory. A novel shared transport matrix implementation algorithm is developed to achieve lightweight and interpretable alignment. Extensive experiments on three self-constructed coastal wetland datasets and one public dataset show that FRFSL outperforms eleven state-of-the-art algorithms. The code will be available at <uri>https://github.com/Yqx-ACE/TIP_2025_FRFSL</uri>","PeriodicalId":94032,"journal":{"name":"IEEE transactions on image processing : a publication of the IEEE Signal Processing Society","volume":"35 ","pages":"194-207"},"PeriodicalIF":13.7,"publicationDate":"2025-12-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145823140","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
Fast Blind Image Deblurring Based on Cross Partial Derivative
IF 13.7 Pub Date : 2025-12-23 DOI: 10.1109/TIP.2025.3645574
Kuan-Chung Ting;Sheng-Jyh Wang;Ruey-Bing Hwang
In this paper, based on the second-order cross partial derivative (CPD), we propose an efficient blind image deblurring algorithm for uniform blur. The proposed method consists of two stages. We first apply a novel blur kernel estimation method to quickly estimate the blur kernel. Then, we use the estimated kernel to perform non-blind deconvolution to restore the image. A key discovery behind the proposed kernel estimation method is that the blur kernel information is usually embedded in the CPD image of the blurred image. By exploiting this property, we propose a pipeline to extract a set of kernel candidates directly from the CPD image and then select the most suitable kernel as the estimated blur kernel. Since our kernel estimation method can obtain a fairly accurate blur kernel, we can achieve effective image restoration using a relatively simple Tikhonov regularization in the subsequent non-blind deconvolution process. To improve the quality of the restored image, we further adopt an efficient filtering technique to suppress periodic artifacts that may appear in the restored images. Experimental results demonstrate that our algorithm can efficiently restore high-quality sharp images on standard CPUs without relying on GPU acceleration or parallel computation. For blurred images of approximately $800 \times 800$ resolution, the proposed method can complete image deblurring within 1 to 5 seconds, which is significantly faster than most state-of-the-art methods. Our MATLAB codes are available at https://github.com/e11tkcee06-a11y/CPD-Deblur.git.
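The non-blind step described here, Tikhonov-regularized deconvolution with a known (estimated) kernel, has a simple frequency-domain form: X = conj(H) * Y / (|H|^2 + lam). The authors provide MATLAB code at the repository above; as a language-agnostic illustration, the NumPy sketch below demonstrates only that formula on a synthetic uniform blur. The padding scheme, lam value, and 9x9 kernel are assumptions, and the sketch omits the paper's CPD-based kernel estimation and periodic-artifact suppression.

```python
import numpy as np

def pad_kernel(kernel, shape):
    # Embed the small PSF at the centre of a zero array of the image size.
    out = np.zeros(shape)
    kh, kw = kernel.shape
    top, left = shape[0] // 2 - kh // 2, shape[1] // 2 - kw // 2
    out[top:top + kh, left:left + kw] = kernel
    return out

def tikhonov_deconv(blurred, kernel, lam=1e-2):
    # Frequency-domain non-blind deconvolution with Tikhonov regularization:
    # X = conj(H) * Y / (|H|^2 + lam), H being the kernel's transfer function.
    K = np.fft.fft2(np.fft.ifftshift(pad_kernel(kernel, blurred.shape)))
    Y = np.fft.fft2(blurred)
    X = np.conj(K) * Y / (np.abs(K) ** 2 + lam)
    return np.real(np.fft.ifft2(X))

rng = np.random.default_rng(2)
img = rng.uniform(size=(128, 128))
psf = np.ones((9, 9)) / 81.0                     # assumed uniform blur kernel
H = np.fft.fft2(np.fft.ifftshift(pad_kernel(psf, img.shape)))
blurred = np.real(np.fft.ifft2(np.fft.fft2(img) * H))
restored = tikhonov_deconv(blurred, psf)
print(np.mean((restored - img) ** 2))            # residual reconstruction error
```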
{"title":"Fast Blind Image Deblurring Based on Cross Partial Derivative","authors":"Kuan-Chung Ting;Sheng-Jyh Wang;Ruey-Bing Hwang","doi":"10.1109/TIP.2025.3645574","DOIUrl":"10.1109/TIP.2025.3645574","url":null,"abstract":"In this paper, based on second-order cross-partial derivative (CPD), we propose an efficient blind image deblurring algorithm for uniform blur. The proposed method consists of two stages. We first apply a novel blur kernel estimation method to quickly estimate the blur kernel. Then, we use the estimated kernel to perform non-blind deconvolution to restore the image. A key discovery of the proposed kernel estimation method is that the blur kernel information is usually embedded in the cross-partial-derivative (CPD) image of the blurred image. By exploiting this property, we propose a pipeline to extract a set of kernel candidates directly from the CPD image and then select the most suitable kernel as the estimated blur kernel. Since our kernel estimation method can obtain a fairly accurate blur kernel, we can achieve effective image restoration using a relatively simple Tikhonov regularization in the subsequent non-blind deconvolution process. To improve the quality of the restored image, we further adopt an efficient filtering technique to suppress periodic artifacts that may appear in the restored images. Experimental results demonstrate that our algorithm can efficiently restore high-quality sharp images on standard CPUs without relying on GPU acceleration or parallel computation. For blurred images of approximately <inline-formula> <tex-math>$800times 800$ </tex-math></inline-formula> resolution, the proposed method can complete image deblurring within 1 to 5 seconds, which is significantly faster than most state-of-the-art methods. Our MATLAB codes are available at <uri>https://github.com/e11tkcee06-a11y/CPD-Deblur.git</uri>.","PeriodicalId":94032,"journal":{"name":"IEEE transactions on image processing : a publication of the IEEE Signal Processing Society","volume":"34 ","pages":"8627-8640"},"PeriodicalIF":13.7,"publicationDate":"2025-12-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145812858","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
Toward Unified Co-Speech Gesture Generation via Hierarchical Implicit Periodicity Learning
IF 13.7 Pub Date : 2025-12-23 DOI: 10.1109/TIP.2025.3645572
Xin Guo;Yifan Zhao;Jia Li
Generating 3D body movements from speech shows great potential for extensive downstream applications, yet it still faces challenges in imitating realistic human movements. Predominant research efforts focus on end-to-end schemes for generating co-speech gestures, spanning GANs, VQ-VAE, and recent diffusion models. Since the task is an ill-posed problem, we argue that these prevailing learning schemes fail to model crucial inter- and intra-correlations across different motion units, i.e., head, body, and hands, leading to unnatural movements and poor coordination. To delve into these intrinsic correlations, we propose a unified Hierarchical Implicit Periodicity (HIP) learning approach for audio-inspired 3D gesture generation. Different from predominant research, our approach models this multi-modal implicit relationship through two explicit technical insights: i) to disentangle the complicated gesture movements, we first explore gesture motion phase manifolds with periodic autoencoders, imitating natural human motion from realistic distributions while incorporating non-periodic components from current latent states for instance-level diversity; ii) to model the hierarchical relationship among face motions, body gestures, and hand movements, we drive the animation with cascaded guidance during learning. We exhibit our proposed approach on 3D avatars, and extensive experiments show that our method outperforms state-of-the-art co-speech gesture generation methods in both quantitative and qualitative evaluations. Code and models will be publicly available.
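The "phase manifold" idea rests on describing each latent motion channel by periodic parameters. As a toy illustration only, the snippet below extracts an (amplitude, frequency, phase, offset) description of a single latent trajectory with an FFT, which is the kind of periodic parameterization a periodic autoencoder produces in a learned, differentiable way. The function name and sampling rate are assumptions, and this is not the paper's model.

```python
import numpy as np

def periodic_parameters(latent_traj, fps=30):
    # Toy extraction of (amplitude, frequency, phase, offset) for one latent
    # channel sampled over time, using the dominant non-DC FFT bin.
    x = latent_traj - latent_traj.mean()
    spec = np.fft.rfft(x)
    freqs = np.fft.rfftfreq(len(x), d=1.0 / fps)
    k = np.argmax(np.abs(spec[1:])) + 1
    amplitude = 2 * np.abs(spec[k]) / len(x)
    phase = float(np.angle(spec[k]))
    return amplitude, freqs[k], phase, latent_traj.mean()

t = np.arange(90) / 30.0                           # 3 seconds at 30 fps
latent = 0.7 * np.sin(2 * np.pi * 2.0 * t + 0.5) + 0.1
print(periodic_parameters(latent))                 # amplitude ~0.7, frequency ~2.0 Hz
```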
{"title":"Toward Unified Co-Speech Gesture Generation via Hierarchical Implicit Periodicity Learning","authors":"Xin Guo;Yifan Zhao;Jia Li","doi":"10.1109/TIP.2025.3645572","DOIUrl":"10.1109/TIP.2025.3645572","url":null,"abstract":"Generating 3D-based body movements from speech shows great potential in extensive downstream applications, while it still suffers challenges in imitating realistic human movements. Predominant research efforts focus on end-to-end generation schemes to generate co-speech gestures, spanning GANs, VQ-VAE, and recent diffusion models. As an ill-posed problem, in this paper, we argue that these prevailing learning schemes fail to model crucial inter- and intra-correlations across different motion units, i.e. head, body, and hands, thus leading to unnatural movements and poor coordination. To delve into these intrinsic correlations, we propose a unified Hierarchical Implicit Periodicity (HIP) learning approach for audio-inspired 3D gesture generation. Different from predominant research, our approach models this multi-modal implicit relationship by two explicit technique insights: i) To disentangle the complicated gesture movements, we first explore the gesture motion phase manifolds with periodic autoencoders to imitate human natures from realistic distributions while incorporating non-period ones from current latent states for instance-level diversities. ii) To model the hierarchical relationship of face motions, body gestures, and hand movements, driving the animation with cascaded guidance during learning. We exhibit our proposed approach on 3D avatars and extensive experiments show our method outperforms the state-of-the-art co-speech gesture generation methods by both quantitative and qualitative evaluations. Code and models will be publicly available.","PeriodicalId":94032,"journal":{"name":"IEEE transactions on image processing : a publication of the IEEE Signal Processing Society","volume":"35 ","pages":"208-220"},"PeriodicalIF":13.7,"publicationDate":"2025-12-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145812856","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
A Cosine Network for Image Super-Resolution
IF 13.7 Pub Date : 2025-12-23 DOI: 10.1109/TIP.2025.3645630
Chunwei Tian;Chengyuan Zhang;Bob Zhang;Zhiwu Li;C. L. Philip Chen;David Zhang
Deep convolutional neural networks can use hierarchical information to progressively extract structural information and recover high-quality images. However, preserving the effectiveness of the obtained structural information is important in image super-resolution. In this paper, we propose a cosine network for image super-resolution (CSRNet) by improving the network architecture and optimizing the training strategy. To extract complementary homologous structural information, odd and even heterogeneous blocks are designed to enlarge the architectural differences and improve the performance of image super-resolution. Combining linear and non-linear structural information can overcome the drawback of homologous information and enhance the robustness of the obtained structural information in image super-resolution. To account for local minima in gradient descent, a cosine annealing mechanism is used to optimize the training procedure by performing warm restarts and adjusting the learning rate. Experimental results illustrate that the proposed CSRNet is competitive with state-of-the-art methods in image super-resolution.
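Cosine annealing with warm restarts follows the SGDR-style schedule lr_t = eta_min + 0.5 * (eta_max - eta_min) * (1 + cos(pi * T_cur / T_i)), with the learning rate reset to eta_max at each restart. A minimal PyTorch sketch using the built-in scheduler is shown below; the toy model and the T_0, T_mult, and eta_min values are assumptions for illustration, not the paper's training settings.

```python
import torch
from torch.optim.lr_scheduler import CosineAnnealingWarmRestarts

model = torch.nn.Linear(16, 1)                    # toy model to hold parameters
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)

# Restart to the maximum learning rate every T_0 epochs, doubling the period
# after each restart (T_mult=2), with a floor of eta_min.
scheduler = CosineAnnealingWarmRestarts(optimizer, T_0=10, T_mult=2, eta_min=1e-4)

for epoch in range(30):
    # ... one training epoch (forward, loss, backward) would run here ...
    optimizer.step()                              # placeholder optimizer step
    scheduler.step()
    if epoch % 5 == 0:
        print(epoch, optimizer.param_groups[0]["lr"])
```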
{"title":"A Cosine Network for Image Super-Resolution","authors":"Chunwei Tian;Chengyuan Zhang;Bob Zhang;Zhiwu Li;C. L. Philip Chen;David Zhang","doi":"10.1109/TIP.2025.3645630","DOIUrl":"10.1109/TIP.2025.3645630","url":null,"abstract":"Deep convolutional neural networks can use hierarchical information to progressively extract structural information to recover high-quality images. However, preserving the effectiveness of the obtained structural information is important in image super-resolution. In this paper, we propose a cosine network for image super-resolution (CSRNet) by improving a network architecture and optimizing the training strategy. To extract complementary homologous structural information, odd and even heterogeneous blocks are designed to enlarge the architectural differences and improve the performance of image super-resolution. Combining linear and non-linear structural information can overcome the drawback of homologous information and enhance the robustness of the obtained structural information in image super-resolution. Taking into account the local minimum of gradient descent, a cosine annealing mechanism is used to optimize the training procedure by performing warm restarts and adjusting the learning rate. Experimental results illustrate that the proposed CSRNet is competitive with state-of-the-art methods in image super-resolution.","PeriodicalId":94032,"journal":{"name":"IEEE transactions on image processing : a publication of the IEEE Signal Processing Society","volume":"35 ","pages":"305-316"},"PeriodicalIF":13.7,"publicationDate":"2025-12-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145812860","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0