
Latest publications from IEEE Transactions on Image Processing

A Few-Shot Class Incremental Learning Method Using Graph Neural Networks.
IF 10.6 | CAS Tier 1, Computer Science | Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2026-01-28 | DOI: 10.1109/tip.2026.3657170
Yuqian Ma, Youfa Liu, Bo Du
Few-shot class incremental learning (FSCIL) aims to continuously learn new classes from limited training samples while retaining previously acquired knowledge. Existing approaches are not fully capable of balancing stability and plasticity in dynamic scenarios. To overcome this limitation, we introduce a novel FSCIL framework that leverages graph neural networks (GNNs) to model interdependencies between different categories and enhance cross-modal alignment. Our framework incorporates three key components: (1) a Graph Isomorphism Network (GIN) to propagate contextual relationships among prompts; (2) a Hamiltonian Graph Network with Energy Conservation (HGN-EC) to stabilize training dynamics via energy conservation constraints; and (3) an Adversarially Constrained Graph Autoencoder (ACGA) to enforce latent space consistency. By integrating these components with a parameter-efficient CLIP backbone, our method dynamically adapts graph structures to model semantic correlations between textual and visual modalities. Additionally, contrastive learning with energy-based regularization is employed to mitigate catastrophic forgetting and improve generalization. Comprehensive experiments on benchmark datasets validate the framework's incremental accuracy and stability compared to state-of-the-art baselines. This work advances FSCIL by unifying graph-based relational reasoning with physics-inspired optimization, offering a scalable and interpretable framework.
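The abstract sketches the architecture without code; as a rough illustration of the first component only, the snippet below shows GIN-style propagation of prompt embeddings over a class-relation graph. The layer sizes, the random adjacency, and the random prompt features are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch of GIN-style propagation over a class-relation graph:
# h_v' = MLP((1 + eps) * h_v + sum_{u in N(v)} h_u).
import torch
import torch.nn as nn

class GINLayer(nn.Module):
    def __init__(self, dim: int):
        super().__init__()
        self.eps = nn.Parameter(torch.zeros(1))
        self.mlp = nn.Sequential(nn.Linear(dim, dim), nn.ReLU(), nn.Linear(dim, dim))

    def forward(self, h: torch.Tensor, adj: torch.Tensor) -> torch.Tensor:
        # h: (num_classes, dim) prompt embeddings; adj: (num_classes, num_classes) 0/1 matrix.
        return self.mlp((1.0 + self.eps) * h + adj @ h)

num_classes, dim = 10, 512                                  # hypothetical sizes
prompts = torch.randn(num_classes, dim)                     # stand-in for CLIP prompt features
adj = (torch.rand(num_classes, num_classes) > 0.7).float()
adj = ((adj + adj.T) > 0).float().fill_diagonal_(0)         # symmetric, no self-loops
refined = GINLayer(dim)(prompts, adj)
print(refined.shape)                                        # torch.Size([10, 512])
```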
Citations: 0
BP-NeRF: End-to-End Neural Radiance Fields for Sparse Images without Camera Pose in Complex Scenes.
IF 10.6 | CAS Tier 1, Computer Science | Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2026-01-28 | DOI: 10.1109/tip.2026.3657188
Yaru Qiu, Guoxia Wu, Yuanyuan Sun
Synthesizing high-quality novel views of complex scenes from sparse image sequences, especially when camera poses are unavailable, is a challenging task. The key to enhancing accuracy in such scenarios lies in sufficient prior knowledge and accurate camera motion constraints. Therefore, we propose an end-to-end novel view synthesis network named BP-NeRF. It is capable of using sequences of sparse images captured in indoor and outdoor complex scenes to estimate camera motion trajectories and generate novel view images. Firstly, to address the issue of inaccurate depth-map prediction caused by insufficient overlapping features in sparse images, we design the RDP-Net module to generate depth maps for sparse image sequences and calculate the depth accuracy of these maps, providing the network with a reliable depth prior. Secondly, to enhance the accuracy of camera pose estimation, we construct a loss function based on the geometric consistency of 2D and 3D feature variations between frames, improving the accuracy and robustness of the network's estimations. We conduct experimental evaluations on the LLFF and Tanks datasets, and the results show that, compared to current mainstream methods, BP-NeRF can generate more accurate novel views without camera poses.
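As a loose sketch of the second contribution, the snippet below combines a photometric term with a frame-to-frame depth-consistency term weighted by a depth-prior confidence map. The warped depth, the confidence values, and the 0.1 weight are all assumptions; the paper's actual loss is not given in the abstract.

```python
# Minimal sketch (assumptions, not the paper's implementation): a combined
# objective that weights a frame-to-frame geometric consistency term by a
# per-pixel depth-confidence map, alongside the usual photometric NeRF loss.
import torch
import torch.nn.functional as F

def consistency_loss(rendered_rgb, target_rgb, depth_a, depth_b_warped, depth_conf):
    # rendered_rgb, target_rgb: (N, 3) sampled ray colors
    # depth_a:        (N,) depth rendered in frame A
    # depth_b_warped: (N,) frame-B depth re-projected into frame A (assumed given)
    # depth_conf:     (N,) confidence of the depth prior in [0, 1]
    photometric = F.mse_loss(rendered_rgb, target_rgb)
    geometric = (depth_conf * (depth_a - depth_b_warped).abs()).mean()
    return photometric + 0.1 * geometric   # 0.1 is an assumed weighting

rays = 1024
loss = consistency_loss(torch.rand(rays, 3), torch.rand(rays, 3),
                        torch.rand(rays), torch.rand(rays), torch.rand(rays))
print(float(loss))
```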
Citations: 0
Domain-Adaptive Mamba for Cross-Scene Hyperspectral Image Classification.
IF 10.6 | CAS Tier 1, Computer Science | Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2026-01-28 | DOI: 10.1109/tip.2026.3657209
Puhong Duan, Shiyu Jin, Xiaotian Lu, Lianhui Liang, Xudong Kang, Antonio Plaza
Cross-scene hyperspectral image classification aims to identify a new scene in the target domain via knowledge learned from the source domain using limited training samples. Existing cross-scene alignment approaches focus on aligning the global feature distribution between the source and target domains while overlooking the fine-grained alignment at different levels. Moreover, they mainly use Transformer architectures to model long-range dependencies across different channels but confront efficiency challenges due to their quadratic complexity, which limits classification performance in unsupervised domain adaptation tasks. To address these issues, a new domain-adaptive Mamba (DAMamba) is proposed for cross-scene hyperspectral image classification. First, a spectral-spatial Mamba is developed to extract high-order semantic features from the input data. Then, a domain-invariant prototype alignment method is proposed from three perspectives, i.e., intra-domain, inter-domain, and mini-batch, to produce reliable pseudo-labels and mitigate the spectral shift between the source and target domains. Finally, a fully connected layer is applied to the aligned features in the target domain to obtain the final classification results. Extensive evaluations across diverse cross-scene datasets demonstrate that our DAMamba outperforms existing state-of-the-art methods in classification accuracy and computing time. The code of this paper is available at https://github.com/PuhongDuan/DAMamba.
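The prototype-alignment idea can be made concrete with a minimal sketch: source-class prototypes assign nearest-prototype pseudo-labels to target features, and a simple inter-domain term pulls the two sets of prototypes together. The feature dimensions and the MSE alignment term are assumptions, not the released DAMamba code.

```python
# Minimal sketch (assumed, not the released DAMamba code): nearest-prototype
# pseudo-labelling and a simple source/target prototype alignment term.
import torch
import torch.nn.functional as F

def class_prototypes(features, labels, num_classes):
    # features: (N, D); labels: (N,) integer class ids.
    protos = torch.zeros(num_classes, features.size(1))
    for c in range(num_classes):
        mask = labels == c
        if mask.any():
            protos[c] = features[mask].mean(dim=0)
    return F.normalize(protos, dim=1)

num_classes, dim = 9, 128                                   # hypothetical sizes
src_feat, src_lab = torch.randn(256, dim), torch.randint(0, num_classes, (256,))
tgt_feat = torch.randn(300, dim)

src_proto = class_prototypes(src_feat, src_lab, num_classes)
pseudo = (F.normalize(tgt_feat, dim=1) @ src_proto.T).argmax(dim=1)   # pseudo-labels
tgt_proto = class_prototypes(tgt_feat, pseudo, num_classes)
align_loss = F.mse_loss(tgt_proto, src_proto)               # inter-domain prototype alignment
print(pseudo.shape, float(align_loss))
```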
Citations: 0
Equivariant High-Resolution Hyperspectral Imaging via Mosaiced and PAN Image Fusion.
IF 10.6 | CAS Tier 1, Computer Science | Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2026-01-28 | DOI: 10.1109/tip.2026.3657219
Nan Wang, Anjing Guo, Renwei Dian, Shutao Li
Existing mosaic-based snapshot hyperspectral imaging systems struggle to capture high-resolution (HR) hyperspectral images (HSI), limiting their application. Fusing a low-resolution (LR) mosaiced image with an HR panchromatic (PAN) image serves as a feasible solution to obtain the HR HSI. Therefore, we propose a dual-sensor based HSI imaging system, combining a 4×4 spectral filter array (SFA) mosaiced image sensor with a co-aligned PAN image sensor to provide complementary spatial-spectral information. To reconstruct the HR HSI, we propose an unsupervised equivariant imaging (EI)-based training framework with a learnable degradation function, overcoming the inaccessibility of ground truth and the spectral response function (SRF). Specifically, we formulate the degradation process as a combination of 8×8 mosaicing and 2×2 average downsampling for the LR mosaiced image, while modeling the PAN image as a linear projection of the HR HSI using the SRF. Since the parameters of the SRF are inaccessible, we propose to make them learnable to obtain an accurate estimation. By enforcing transformation equivariance between the input-output pair of the fusion network, the proposed framework ensures the reconstructed HSI preserves spatial-spectral consistency without relying on paired supervision. Furthermore, we instantiate the proposed HSI imaging system and collect a real-world dataset of 60 paired mosaiced / PAN images. The mosaiced image exhibits 16 spectral bands ranging from 722 to 896 nm and 1020×1104 spatial pixels, while the PAN image exhibits 2040×2208 spatial pixels. Comprehensive experiments demonstrate that the proposed method exhibits high spatial consistency and spectral fidelity while maintaining computational efficiency.
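A minimal sketch of the forward (degradation) model described above, using a simplified single-period 4×4 mosaic in place of the paper's exact 8×8 mosaicing operator: the 16-band HR HSI is mosaiced, 2×2 average-downsampled, and the PAN image is formed as an SRF-weighted sum over bands with learnable weights. Everything below is an illustrative assumption rather than the authors' code.

```python
# Sketch of the degradation model: SFA mosaicing + 2x2 average downsampling
# for the LR branch, and a learnable SRF-weighted band sum for the PAN branch.
import torch
import torch.nn.functional as F

def mosaic(hsi, pattern):
    # hsi: (B, 16, H, W); pattern: (4, 4) long tensor of band indices.
    B, C, H, W = hsi.shape
    band_idx = pattern.repeat(H // 4, W // 4)                # (H, W) band index per pixel
    mask = F.one_hot(band_idx, num_classes=C).permute(2, 0, 1).float().unsqueeze(0)
    return (hsi * mask).sum(dim=1, keepdim=True)             # (B, 1, H, W)

hsi = torch.rand(1, 16, 64, 64)                              # toy HR HSI
pattern = torch.arange(16).reshape(4, 4)                     # assumed SFA layout
lr_mosaic = F.avg_pool2d(mosaic(hsi, pattern), kernel_size=2)  # (1, 1, 32, 32)

srf = torch.softmax(torch.randn(16, requires_grad=True), dim=0)   # learnable SRF weights
pan = (hsi * srf.view(1, 16, 1, 1)).sum(dim=1, keepdim=True)      # (1, 1, 64, 64)
print(lr_mosaic.shape, pan.shape)
```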
Citations: 0
Exploring Frequencies via Feature Mixing and Meta-Learning for Improving Adversarial Transferability
IF 10.6 | CAS Tier 1, Computer Science | Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2026-01-28 | DOI: 10.1109/tip.2026.3657166
Juanjuan Weng, Zhiming Luo, Shaozi Li
{"title":"Exploring Frequencies via Feature Mixing and Meta-Learning for Improving Adversarial Transferability","authors":"Juanjuan Weng, Zhiming Luo, Shaozi Li","doi":"10.1109/tip.2026.3657166","DOIUrl":"https://doi.org/10.1109/tip.2026.3657166","url":null,"abstract":"","PeriodicalId":13217,"journal":{"name":"IEEE Transactions on Image Processing","volume":"55 1","pages":""},"PeriodicalIF":10.6,"publicationDate":"2026-01-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"146070601","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Individual & Common Attack: Enhancing Transferability in VLP Models through Modal Feature Exploitation.
IF 10.6 | CAS Tier 1, Computer Science | Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2026-01-28 | DOI: 10.1109/tip.2026.3651982
Yaguan Qian, Yaxin Kong, Qiqi Bao, Zhaoquan Gu, Bin Wang, Shouling Ji, Jianping Zhang, Zhen Lei
Vision-Language Pretrained (VLP) models exhibit strong multimodal understanding and reasoning capabilities, finding wide application in tasks such as image-text retrieval and visual grounding. However, they remain highly vulnerable to adversarial attacks, posing serious reliability concerns in safety-critical scenarios. We observe that existing adversarial example optimization methods typically rely on individual features from the other modality as guidance, causing the crafted adversarial examples to overfit that modality's learning preferences and thus limiting their transferability. To further enhance the transferability of adversarial examples, we propose a novel adversarial attack framework, I&CA (Individual & Common feature Attack), which simultaneously considers individual features within each modality and common features arising from cross-modal interactions. Concretely, I&CA first drives divergence among individual features within each modality to disrupt single-modality learning, and then suppresses the expression of common features during cross-modal interactions, thereby undermining the robustness of the fusion mechanism. In addition, to prevent adversarial perturbations from overfitting to the learning bias of the other modality, which may distort the representation of common features, we simultaneously introduce augmentation strategies to both modalities. Across various experimental settings and widely recognized multimodal benchmarks, the I&CA framework achieves an average transferability improvement of 6.15% over the state-of-the-art DRA method, delivering significant performance gains in both cross-model and cross-task attack scenarios.
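A self-contained sketch of the attack objective in spirit: a PGD-style loop that lowers the adversarial image feature's similarity to the clean image feature (the individual part) and to the paired text feature (the common part). The linear encoder and the text feature are random stand-ins, not a real VLP model, and the step sizes are assumed.

```python
# Toy PGD-style loop combining an "individual" term (agreement with the clean
# image feature) and a "common" term (agreement with the paired text feature),
# both minimized. Encoders here are stand-ins, not a real VLP model.
import torch
import torch.nn.functional as F

torch.manual_seed(0)
img_enc = torch.nn.Linear(3 * 32 * 32, 256)          # stand-in image encoder
txt_feat = F.normalize(torch.randn(1, 256), dim=1)   # stand-in text feature

image = torch.rand(1, 3, 32, 32)
delta = torch.zeros_like(image, requires_grad=True)
eps, alpha = 8 / 255, 2 / 255

with torch.no_grad():
    clean_feat = F.normalize(img_enc(image.flatten(1)), dim=1)

for _ in range(10):                                   # PGD-style iterations
    adv_feat = F.normalize(img_enc((image + delta).flatten(1)), dim=1)
    individual = F.cosine_similarity(adv_feat, clean_feat).mean()   # to be minimized
    common = F.cosine_similarity(adv_feat, txt_feat).mean()         # to be minimized
    loss = individual + common
    loss.backward()
    with torch.no_grad():
        delta -= alpha * delta.grad.sign()            # descend on the similarity loss
        delta.clamp_(-eps, eps)                       # keep perturbation within budget
        delta.grad.zero_()
print(float(loss))
```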
Citations: 0
Rethinking Multi-Focus Image Fusion: An Input Space Optimisation View.
IF 10.6 | CAS Tier 1, Computer Science | Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2026-01-23 | DOI: 10.1109/tip.2026.3654370
Zeyu Wang, Shuang Yu, Haoran Duan, Shidong Wang, Yang Long, Ling Shao
Multi-focus image fusion (MFIF) addresses the challenge of partial focus by integrating multiple source images taken at different focal depths. Unlike most existing methods that rely on complex loss functions or large-scale synthetic datasets, this study approaches MFIF from a novel perspective: optimizing the input space. The core idea is to construct a high-quality MFIF input space in a cost-effective manner by using intermediate features from well-trained, non-MFIF networks. To this end, we propose a cascaded framework comprising two feature extractors, a Feature Distillation and Fusion Module (FDFM), and a focus segmentation network YUNet. Based on our observation that discrepancy and edge features are essential for MFIF, we select an image deblurring network and a salient object detection network as feature extractors. To transform these extracted features into an MFIF-suitable input space, we propose FDFM as a training-free feature adapter. To make FDFM compatible with high-dimensional feature maps, we extend the manifold theory from the edge-preserving field and design a novel isometric domain transformation. Extensive experiments on six benchmark datasets show that (i) our model consistently outperforms 13 state-of-the-art methods in both qualitative and quantitative evaluations, and (ii) the constructed input space can directly enhance the performance of many MFIF models without additional requirements.
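For readers unfamiliar with MFIF, the snippet below is a generic decision-map baseline (per-pixel focus scored by local Laplacian energy) that makes the problem setting concrete; it is not the FDFM/YUNet pipeline proposed above, and the window size and focus cue are arbitrary choices.

```python
# Generic two-source multi-focus fusion baseline: pick, per pixel, the source
# with the higher local gradient energy. Included only to illustrate the task.
import torch
import torch.nn.functional as F

def focus_score(gray, win: int = 9):
    # gray: (B, 1, H, W); local energy of a Laplacian response as a focus cue.
    lap_kernel = torch.tensor([[0., 1., 0.], [1., -4., 1.], [0., 1., 0.]]).view(1, 1, 3, 3)
    response = F.conv2d(gray, lap_kernel, padding=1) ** 2
    return F.avg_pool2d(response, win, stride=1, padding=win // 2)

src_a = torch.rand(1, 1, 128, 128)   # near-focused source (toy data)
src_b = torch.rand(1, 1, 128, 128)   # far-focused source (toy data)
decision = (focus_score(src_a) >= focus_score(src_b)).float()  # 1 -> take source A
fused = decision * src_a + (1.0 - decision) * src_b
print(fused.shape)
```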
Citations: 0
U-RWKV: Accurate and Efficient Volumetric Medical Image Segmentation via RWKV.
IF 10.6 | CAS Tier 1, Computer Science | Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2026-01-23 | DOI: 10.1109/tip.2026.3654389
Hongyu Cai, Yifan Wang, Liu Wang, Jian Zhao, Zhejun Kuang
Accurate and efficient volumetric medical image segmentation is vital for clinical diagnosis, pre-operative planning, and disease-progression monitoring. Conventional convolutional neural networks (CNNs) struggle to capture long-range contextual information, whereas Transformer-based methods suffer from quadratic computational complexity, making it challenging to couple global modeling with high efficiency. To address these limitations, we explore an effective yet accurate segmentation model for volumetric data. Specifically, we introduce a novel linear-complexity sequence modeling technique, RWKV, and leverage it to design a Tri-directional Spatial Enhancement RWKV (TSE-R) block; this module performs global modeling via RWKV and incorporates two optimizations tailored to three-dimensional data: (1) a spatial-shift strategy that enlarges the local receptive field and facilitates inter-block interaction, thereby alleviating the structural information loss caused by sequence serialization; and (2) a tri-directional scanning mechanism that constructs sequences along three distinct directions, applies global modeling via WKV, and fuses them with learnable weights to preserve the inherent 3D spatial structure. Building upon the TSE-R block, we develop an end-to-end 3D segmentation network, termed U-RWKV, and extensive experiments on three public 3D medical segmentation benchmarks demonstrate that U-RWKV outperforms state-of-the-art CNN-, Transformer-, and Mamba-based counterparts, achieving a Dice score of 87.21% on the Synapse multi-organ abdominal dataset while reducing parameter count by a factor of 16.08 compared with leading methods.
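A minimal sketch of the tri-directional scanning idea: the 3D feature volume is serialized along three axis orders, each sequence is passed through a sequence mixer, and the three outputs are fused with learnable softmax weights. The GRU is only a placeholder for the RWKV token mixer, and all sizes are illustrative.

```python
# Sketch of tri-directional serialization with a placeholder sequence mixer.
import torch
import torch.nn as nn

class TriDirectionalScan(nn.Module):
    def __init__(self, dim: int):
        super().__init__()
        self.mixer = nn.GRU(dim, dim, batch_first=True)    # stand-in for the RWKV mixer
        self.fuse_logits = nn.Parameter(torch.zeros(3))    # learnable fusion weights
        # forward permutation and its inverse for each scan direction
        self.scans = [((0, 2, 3, 4, 1), (0, 4, 1, 2, 3)),  # D-H-W order
                      ((0, 3, 4, 2, 1), (0, 4, 3, 1, 2)),  # H-W-D order
                      ((0, 4, 2, 3, 1), (0, 4, 2, 3, 1))]  # W-D-H order

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (B, C, D, H, W) volumetric features
        B, C = x.shape[:2]
        outs = []
        for perm, inv in self.scans:
            vol = x.permute(*perm)                 # move channels last
            seq = vol.reshape(B, -1, C)            # serialize the volume
            mixed, _ = self.mixer(seq)             # global token mixing along the scan
            outs.append(mixed.reshape(vol.shape).permute(*inv))
        w = torch.softmax(self.fuse_logits, dim=0)
        return w[0] * outs[0] + w[1] * outs[1] + w[2] * outs[2]

block = TriDirectionalScan(dim=16)
print(block(torch.randn(2, 16, 8, 8, 8)).shape)    # torch.Size([2, 16, 8, 8, 8])
```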
Citations: 0
Knowledge-Prompted Trustworthy Disentangled Learning for Thyroid Ultrasound Segmentation with Limited Annotations.
IF 10.6 | CAS Tier 1, Computer Science | Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2026-01-23 | DOI: 10.1109/tip.2026.3654413
Wenxu Wang, Weizhen Wang, Qianjin Feng, Yu Zhang, Zhenyuan Ning
The similar textures, diverse shapes and blurred boundaries of thyroid lesions in ultrasound images pose a significant challenge to accurate segmentation. Although several methods have been proposed to alleviate the aforementioned issues, their generalization is hindered by limited annotation data and an insufficient ability to distinguish lesions from surrounding tissues, especially in the presence of noise and outliers. Additionally, most existing methods lack uncertainty estimation, which is essential for providing trustworthy results and identifying potential mispredictions. To this end, we propose knowledge-prompted trustworthy disentangled learning (KPTD) for thyroid ultrasound segmentation with limited annotations. The proposed method consists of three key components: 1) Knowledge-aware prompt learning (KAPL) encodes TI-RADS reports into text features and introduces learnable prompts to extract contextual embeddings, which assist in generating region activation maps (serving as pseudo-labels for unlabeled images). 2) Foreground-background disentangled learning (FBDL) leverages region activation maps to disentangle foreground and background representations, refining their prototype distributions through a contrastive learning strategy to enhance the model's discrimination and robustness. 3) Foreground-background trustworthy fusion (FBTF) integrates the foreground and background representations and estimates their uncertainty based on evidence theory, providing trustworthy segmentation results. Experimental results show that KPTD achieves superior segmentation performance under limited annotations, significantly outperforming state-of-the-art methods.
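The evidence-theory fusion in component 3 can be illustrated with the standard subjective-logic (Dirichlet) formulation used in evidential deep learning; the abstract does not give the exact combination rule, so the sketch below simply sums the evidence from the two branches, which is an assumption.

```python
# Subjective-logic style uncertainty from per-class evidence: alpha = e + 1,
# belief = e / S, uncertainty (vacuity) = K / S, with S = sum(alpha).
import torch
import torch.nn.functional as F

def dirichlet_uncertainty(evidence: torch.Tensor):
    # evidence: (B, K, H, W) non-negative per-class evidence.
    K = evidence.size(1)
    alpha = evidence + 1.0                     # Dirichlet parameters
    strength = alpha.sum(dim=1, keepdim=True)
    belief = evidence / strength               # per-class belief mass
    uncertainty = K / strength                 # high where evidence is weak
    prob = alpha / strength                    # expected class probability
    return prob, belief, uncertainty

B, K, H, W = 1, 2, 64, 64                      # binary fg/bg segmentation (toy sizes)
fg_evidence = F.softplus(torch.randn(B, K, H, W))   # evidence from foreground branch
bg_evidence = F.softplus(torch.randn(B, K, H, W))   # evidence from background branch
prob, belief, u = dirichlet_uncertainty(fg_evidence + bg_evidence)   # assumed fusion by sum
print(prob.shape, u.mean().item())
```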
Citations: 0
Topology-Guided Semantic Face Center Estimation for Rotation-Invariant Face Detection.
IF 10.6 | CAS Tier 1, Computer Science | Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2026-01-23 | DOI: 10.1109/tip.2026.3654422
Hathai Kaewkorn, Lifang Zhou, Weisheng Li, Chengjiang Long
Face detection accuracy significantly decreases under rotational variations, including in-plane (RIP) and out-of-plane (ROP) rotations. ROP is particularly problematic due to its impact on landmark distortion, which leads to inaccurate face center localization. Meanwhile, many existing rotation-invariant models are primarily designed to handle RIP and often fail under ROP because they lack the ability to capture semantic and topological relationships. Moreover, existing datasets frequently suffer from unreliable landmark annotations caused by imperfect ground truth labeling, the absence of precise center annotations, and imbalanced data across different rotation angles. To address these challenges, we propose a topology-guided semantic face center estimation method that leverages graph-based landmark relationships to preserve structural integrity under both RIP and ROP. Additionally, we construct a rotation-aware face dataset with accurate face center annotations and balanced rotational diversity to support training under extreme pose conditions. Next, we introduce a Hybrid-ViT model that fuses CNN spatial features with transformer-based global context and employ a center-guided module for robust landmark localization under extreme rotations. To evaluate center quality, we further design a hybrid metric that combines topological geometry with semantic perception for a more comprehensive evaluation of face center accuracy. Finally, experimental results demonstrate that our method outperforms state-of-the-art models in cross-dataset evaluations. Code: https://github.com/Catster111/TCE_RIFD.
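Purely as a toy illustration of topology-guided center estimation (not the paper's estimator), the snippet below weights landmarks by their connectivity in an assumed landmark adjacency graph, so structurally central points dominate the centroid.

```python
# Toy topology-weighted face-center estimate: degree-weighted landmark centroid.
import numpy as np

landmarks = np.array([[30, 40], [70, 40], [50, 60], [40, 80], [60, 80]], float)  # (x, y)
adjacency = np.array([[0, 1, 1, 0, 0],        # assumed connections between landmarks
                      [1, 0, 1, 0, 0],
                      [1, 1, 0, 1, 1],
                      [0, 0, 1, 0, 1],
                      [0, 0, 1, 1, 0]], float)

degree = adjacency.sum(axis=1)                # connectivity of each landmark
weights = degree / degree.sum()
center = (weights[:, None] * landmarks).sum(axis=0)
print(center)                                 # topology-weighted center, here [50., 60.]
```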
Citations: 0