
Image and Vision Computing: Latest Publications

SEAGNet: Spatial–Epipolar–Angular–Global feature learning for light field super-resolution
IF 4.2 | CAS Tier 3, Computer Science | Q2 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2025-12-13 | DOI: 10.1016/j.imavis.2025.105866
Xingzheng Wang, Haotian Zhang, Yuhang Lin, Yuanbo Huang, Jiahao Lin
In light field (LF) image super-resolution (SR), comprehensive learning of LF information is crucial for accurately recovering image details. Because 4D LF structures are complex, current methods typically use specialized convolutions and modules to extract different LF characteristics (such as spatial, angular, and EPI features) separately before combining them. However, these methods concentrate on local LF information and neglect global 4D LF features, which limits further improvement. To overcome this issue, we propose a straightforward yet effective Global Feature Extraction Module (GFEM) that extracts global information from the entire 4D light field and exploits it jointly with the other features. We also introduce a Progressive Angular Feature Extractor (PAFE), which progressively enlarges the feature-extraction region to ensure that angular features are captured across different angular ranges. In addition, we design a Spatial Gated Feed-forward Network (SGFN) to replace the standard feed-forward network in the Transformer, yielding a new Intra-Transformer architecture that optimizes feature flow and enhances local detail extraction. Extensive experiments on several public datasets show that our method outperforms currently available approaches.
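The abstract gives no implementation details, but a spatial-gated feed-forward block of the kind described can be sketched as follows (a minimal PyTorch sketch; the module name, expansion ratio, and the use of a depthwise convolution for the spatial gate are assumptions, not the authors' SGFN):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SpatialGatedFFN(nn.Module):
    """Illustrative spatial-gated feed-forward block (not the authors' SGFN)."""
    def __init__(self, dim, expansion=2):
        super().__init__()
        hidden = dim * expansion
        # point-wise expansion producing two parallel branches: content and gate
        self.proj_in = nn.Conv2d(dim, hidden * 2, kernel_size=1)
        # depth-wise convolution injects local spatial context before gating
        self.dwconv = nn.Conv2d(hidden * 2, hidden * 2, kernel_size=3,
                                padding=1, groups=hidden * 2)
        self.proj_out = nn.Conv2d(hidden, dim, kernel_size=1)

    def forward(self, x):                              # x: (B, C, H, W)
        content, gate = self.dwconv(self.proj_in(x)).chunk(2, dim=1)
        return self.proj_out(content * F.gelu(gate))   # spatially gated mixing

x = torch.randn(1, 32, 64, 64)
print(SpatialGatedFFN(32)(x).shape)                    # torch.Size([1, 32, 64, 64])
```

Because the gate branch passes through a 3x3 depthwise convolution, the element-wise gating is conditioned on local spatial context rather than on each pixel in isolation.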
Citations: 0
ShadowMamba: State-space model with boundary-region selective scan for shadow removal
IF 4.2 | CAS Tier 3, Computer Science | Q2 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2025-12-12 | DOI: 10.1016/j.imavis.2025.105872
Xiujin Zhu, Chee-Onn Chow, Joon Huang Chuah
Image shadow removal is a typical low-level vision task, as shadows introduce abrupt local brightness variations that degrade the performance of downstream tasks. Due to the quadratic complexity of Transformers, many existing methods adopt local attention to balance accuracy and efficiency. However, restricting attention to local windows prevents true long-range dependency modeling and limits shadow removal performance. Recently, Mamba has shown strong ability in vision tasks by achieving global modeling with linear complexity. Despite this advantage, existing scanning mechanisms in the Mamba architecture are not suitable for shadow removal because they ignore the semantic continuity within the same region. To address this, a boundary-region selective scanning mechanism is proposed that captures local details while enhancing continuity among semantically related pixels, effectively improving shadow removal performance. In addition, a shadow mask denoising preprocessing method is introduced to improve the accuracy of the scanning mechanism and further enhance the data quality. Based on this, this paper presents ShadowMamba, the first Mamba-based model for shadow removal. Experimental results show that the proposed method outperforms existing mainstream approaches on the AISTD, ISTD, SRD, and WSRD+ datasets, and demonstrates good generalization ability in cross-dataset testing on USR and SBU. Meanwhile, the model also has significant advantages in parameter efficiency and computational complexity. Code is available at: https://github.com/ZHUXIUJINChris/ShadowMamba.
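As an illustration of what a boundary-region selective scan order could look like, the sketch below splits pixels into a shadow-boundary band, shadow interior, and non-shadow region from a binary shadow mask and returns a 1-D visiting order for a state-space scan; the grouping rule and ordering are assumptions, not the paper's exact mechanism:

```python
import numpy as np

def boundary_region_scan_order(mask):
    """Flat pixel ordering for a selective scan: shadow-boundary band first,
    then shadow interior, then non-shadow region.  mask: (H, W), 1 = shadow.
    Illustrative only; the paper's scan rule may differ."""
    m = mask.astype(bool)
    padded = np.pad(m, 1, mode="edge")
    # a pixel lies in the boundary band if any 4-neighbour has a different label
    neighbours = np.stack([padded[:-2, 1:-1], padded[2:, 1:-1],
                           padded[1:-1, :-2], padded[1:-1, 2:]])
    boundary = (neighbours != m).any(axis=0)
    flat_idx = np.arange(m.size).reshape(m.shape)
    order = np.concatenate([flat_idx[boundary],          # boundary band
                            flat_idx[m & ~boundary],      # shadow interior
                            flat_idx[~m & ~boundary]])    # non-shadow region
    return order   # index a flattened feature map with this order before scanning

mask = np.zeros((6, 6), dtype=np.uint8)
mask[2:5, 2:5] = 1
print(boundary_region_scan_order(mask)[:10])
```

Grouping semantically related pixels before flattening is what keeps the scan from repeatedly jumping across the shadow boundary.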
Citations: 0
LoGA-Attack: Local geometry-aware adversarial attack on 3D point clouds
IF 4.2 | CAS Tier 3, Computer Science | Q2 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2025-12-11 | DOI: 10.1016/j.imavis.2025.105871
Jia Yuan, Jun Chen, Chongshou Li, Pedro Alonso, Xinke Li, Tianrui Li
Adversarial attacks on 3D point clouds are increasingly critical for safety-sensitive domains like autonomous driving. Most existing methods ignore local geometric structure, yielding perturbations that harm imperceptibility and geometric consistency. We introduce the local geometry-aware adversarial attack (LoGA-Attack), an approach that exploits topological and geometric cues to craft refined perturbations. A Neighborhood Centrality (NC) score partitions points into contour and flat point sets. Contour points receive gradient-based iterative updates to maximize attack strength, while flat points use an Optimal Neighborhood-based Attack (ONA) that projects gradients onto the most consistent local geometric direction. Experiments on ModelNet40 and ScanObjectNN show a higher attack success rate with lower perceptual distortion, demonstrating superior performance and strong transferability. Our code is available at: https://github.com/yuanjiachn/LoGA-Attack.
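A rough sketch of the contour/flat partition step, using the distance of each point to the centroid of its k nearest neighbours as a stand-in centrality score (the exact NC definition and the split ratio are assumptions):

```python
import numpy as np

def neighborhood_centrality_split(points, k=16, contour_ratio=0.2):
    """Split a point cloud into 'contour' and 'flat' index sets using the
    distance of each point to the centroid of its k nearest neighbours as a
    stand-in centrality score.  points: (N, 3).  Assumed scoring rule, not
    the paper's exact NC definition."""
    d2 = ((points[:, None, :] - points[None, :, :]) ** 2).sum(-1)   # (N, N)
    knn_idx = np.argsort(d2, axis=1)[:, 1:k + 1]                    # drop self
    centroids = points[knn_idx].mean(axis=1)                        # (N, 3)
    nc_score = np.linalg.norm(points - centroids, axis=1)
    n_contour = int(len(points) * contour_ratio)
    order = np.argsort(-nc_score)          # far from local centroid = contour-like
    return order[:n_contour], order[n_contour:]

pts = np.random.rand(1024, 3).astype(np.float32)
contour_idx, flat_idx = neighborhood_centrality_split(pts)
print(contour_idx.shape, flat_idx.shape)   # (204,) (820,)
```

The two index sets can then be perturbed with different update rules, as the abstract describes for contour and flat points.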
Citations: 0
Gaussian landmarks tracking-based real-time splatting reconstruction model
IF 4.2 | CAS Tier 3, Computer Science | Q2 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2025-12-08 | DOI: 10.1016/j.imavis.2025.105869
Donglin Zhu, Zhongli Wang, Xiaoyang Fan, Miao Chen, Jiuyu Chen
Real-time, high-quality scene reconstruction remains a critical challenge for robotics applications. 3D Gaussian Splatting (3DGS) demonstrates remarkable capabilities in scene rendering. However, its integration with SLAM systems confronts two critical limitations: (1) slow pose tracking caused by rendering the full frame multiple times, and (2) susceptibility to environmental variations such as illumination changes and motion blur. To alleviate these issues, this paper proposes a Gaussian landmark-based real-time reconstruction framework, GLT-SLAM, which is composed of a ray-casting-driven tracking module, a multi-modal keyframe selector, and an incremental geometric–photometric mapping module. To avoid redundant rendering computations, the tracking module achieves efficient 3D-2D correspondence by encoding rays emitted from Gaussian landmarks and fusing attention scores. Furthermore, to enhance the framework's robustness under complex environmental conditions, the keyframe selector balances multiple influencing factors, including image quality, tracking uncertainty, information entropy, and feature overlap ratios. Finally, to achieve a compact map representation, the mapping module adds only Gaussian primitives of points, lines, and planes, and performs global map optimization through joint photometric–geometric constraints. Experimental results on the Replica, TUM RGB-D, and BJTU datasets demonstrate that the proposed method achieves a real-time processing rate of over 30 Hz on an NVIDIA RTX 3090, a 19% efficiency gain over the fastest Photo-SLAM method, while significantly outperforming other baseline methods in both localization and mapping accuracy. The source code will be available on GitHub.
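The multi-factor keyframe selection can be illustrated with a toy scoring function that combines image sharpness, histogram entropy, tracking uncertainty, and view overlap; the factors follow the abstract, but the weights and normalisations below are invented for illustration:

```python
import numpy as np

def keyframe_score(image_gray, track_uncertainty, overlap_ratio,
                   weights=(0.3, 0.3, 0.2, 0.2)):
    """Toy multi-factor keyframe score: sharpness (Laplacian variance, low under
    motion blur), grey-level entropy, tracking uncertainty, and view overlap.
    Weights and normalisations are invented for illustration."""
    img = image_gray.astype(np.float64)
    # 4-neighbour Laplacian as a blur proxy
    lap = (np.roll(img, 1, 0) + np.roll(img, -1, 0) +
           np.roll(img, 1, 1) + np.roll(img, -1, 1) - 4.0 * img)
    sharpness = min(lap.var() / 1e3, 1.0)
    # normalised histogram entropy in [0, 1]
    hist, _ = np.histogram(img, bins=64, range=(0, 256))
    p = hist[hist > 0] / hist.sum()
    entropy = float(-(p * np.log2(p)).sum() / np.log2(64))
    w_sharp, w_ent, w_unc, w_ovl = weights
    # high uncertainty and low overlap both argue for inserting a new keyframe
    return (w_sharp * sharpness + w_ent * entropy +
            w_unc * track_uncertainty + w_ovl * (1.0 - overlap_ratio))

frame = np.random.randint(0, 256, (120, 160)).astype(np.float32)
print(keyframe_score(frame, track_uncertainty=0.4, overlap_ratio=0.7))
```

A frame would be promoted to keyframe when its score exceeds a threshold, so blurry frames and frames that add little new view coverage are skipped.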
Citations: 0
A deformable registration framework for brain MR images based on a dual-channel fusion strategy using GMamba
IF 4.2 | CAS Tier 3, Computer Science | Q2 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2025-12-07 | DOI: 10.1016/j.imavis.2025.105868
Liwei Deng, Songyu Chen, Xin Yang, Sijuan Huang, Jing Wang
Medical image registration has important applications in medical image analysis. Although deep learning-based registration methods are widely recognized, existing algorithms still leave room for improvement because of the complex physiological structure of brain images. In this paper, we propose a deformable medical image registration method that is highly accurate and capable of handling complex physiological structures. Specifically, we propose DFMNet, a dual-channel fusion method based on GMamba, to achieve accurate brain MRI registration. Compared with state-of-the-art networks such as TransMorph, DFMNet adopts a dual-channel network structure with different fusion strategies. We propose the GMamba block to efficiently capture long-range dependencies in the moving and fixed image features, and a context extraction channel to enhance the texture structure of the image content. In addition, we design a weighted fusion block so that the features of the two channels can be fused efficiently. Extensive experiments on three public brain datasets demonstrate the effectiveness of DFMNet, showing that it outperforms multiple current state-of-the-art deformable registration methods in the structural registration of brain images.
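A minimal PyTorch sketch of a weighted fusion block for two feature channels, in which globally pooled statistics of both branches drive per-channel softmax weights; the layer sizes and pooling choice are assumptions rather than the paper's design:

```python
import torch
import torch.nn as nn

class WeightedFusionBlock(nn.Module):
    """Toy dual-channel fusion: globally pooled statistics of both branches
    drive per-channel softmax weights that mix them.  Layer sizes and the
    pooling choice are assumptions, not the paper's exact design."""
    def __init__(self, dim):
        super().__init__()
        self.mlp = nn.Sequential(nn.Linear(2 * dim, dim),
                                 nn.ReLU(inplace=True),
                                 nn.Linear(dim, 2 * dim))

    def forward(self, feat_a, feat_b):          # (B, C, D, H, W) feature volumes
        b, c = feat_a.shape[:2]
        pooled = torch.cat([feat_a.flatten(2).mean(-1),
                            feat_b.flatten(2).mean(-1)], dim=1)   # (B, 2C)
        w = self.mlp(pooled).view(b, 2, c).softmax(dim=1)         # per-channel weights
        wa = w[:, 0].view(b, c, 1, 1, 1)
        wb = w[:, 1].view(b, c, 1, 1, 1)
        return wa * feat_a + wb * feat_b

x1 = torch.randn(1, 16, 8, 32, 32)   # e.g. features from the GMamba channel
x2 = torch.randn(1, 16, 8, 32, 32)   # e.g. features from the context channel
print(WeightedFusionBlock(16)(x1, x2).shape)   # torch.Size([1, 16, 8, 32, 32])
```

The softmax constraint keeps the two branch weights complementary per channel, so neither channel can be dropped entirely during fusion.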
Citations: 0
LCFusion: Infrared and visible image fusion network based on local contour enhancement
IF 4.2 | CAS Tier 3, Computer Science | Q2 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2025-12-03 | DOI: 10.1016/j.imavis.2025.105856
Yitong Yang, Lei Zhu, Xinyang Yao, Hua Wang, Yang Pan, Bo Zhang
Infrared and visible light image fusion aims to generate integrated representations that synergistically preserve salient thermal targets in the infrared modality and high-resolution textural details in the visible light modality. However, existing methods face two core challenges: First, high-frequency noise in visible images, such as sensor noise and nonuniform illumination artifacts, is often highly coupled with effective textures. Traditional fusion paradigms readily amplify noise interference while enhancing details, leading to structural distortion and visual graininess in fusion results. Second, mainstream approaches predominantly rely on simple aggregation operations like feature stitching or linear weighting, lacking deep modeling of cross-modal semantic correlations. This prevents adaptive interaction and collaborative enhancement of complementary information between modalities, creating a significant trade-off between target saliency and detail preservation. To address these challenges, we propose a dual-branch fusion network based on local contour enhancement. Specifically, it distinguishes and enhances meaningful contour details in a learnable manner while suppressing meaningless noise, thereby purifying the detail information used for fusion at its source. Cross-attention weights are computed based on feature representations extracted from different modal branches, enabling a feature selection mechanism that facilitates dynamic cross-modal interaction between infrared and visible light information. We evaluate our method against 11 state-of-the-art deep learning-based fusion approaches across four benchmark datasets using both subjective assessments and objective metrics. The experimental results demonstrate superior performance on public datasets. Furthermore, YOLOv12-based detection tests reveal that our method achieves higher confidence scores and better overall detection performance compared to other fusion techniques.
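The cross-modal feature selection can be illustrated with single-head cross-attention in which infrared features query visible-light features (a hedged PyTorch sketch; the head count, residual connection, and projection sizes are assumptions, not the LCFusion implementation):

```python
import torch
import torch.nn as nn

class CrossModalAttention(nn.Module):
    """Single-head cross-attention in which one modality queries the other.
    Minimal sketch of cross-modal interaction, not the LCFusion design."""
    def __init__(self, dim):
        super().__init__()
        self.q = nn.Linear(dim, dim)
        self.k = nn.Linear(dim, dim)
        self.v = nn.Linear(dim, dim)
        self.scale = dim ** -0.5

    def forward(self, ir_feat, vis_feat):        # both (B, C, H, W)
        b, c, h, w = ir_feat.shape
        q = self.q(ir_feat.flatten(2).transpose(1, 2))      # (B, HW, C)
        k = self.k(vis_feat.flatten(2).transpose(1, 2))
        v = self.v(vis_feat.flatten(2).transpose(1, 2))
        attn = torch.softmax(q @ k.transpose(1, 2) * self.scale, dim=-1)
        out = (attn @ v).transpose(1, 2).view(b, c, h, w)
        return out + ir_feat                                 # residual fusion

ir = torch.randn(1, 32, 16, 16)
vis = torch.randn(1, 32, 16, 16)
print(CrossModalAttention(32)(ir, vis).shape)    # torch.Size([1, 32, 16, 16])
```

Running the block symmetrically (visible querying infrared as well) would give the dynamic two-way interaction the abstract describes.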
Citations: 0
WAM-Net: Wavelet-Based Adaptive Multi-scale Fusion Network for fine-grained action recognition
IF 4.2 | CAS Tier 3, Computer Science | Q2 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2025-12-03 | DOI: 10.1016/j.imavis.2025.105855
Jirui Di, Zhengping Hu, Hehao Zhang, Qiming Zhang, Zhe Sun
Fine-grained actions often lack scene prior information, making strong temporal modeling particularly important. Since these actions primarily rely on subtle and localized motion differences, single-scale features are often insufficient to capture their complexity. In contrast, multi-scale features not only capture fine-grained patterns but also contain rich rhythmic information, which is crucial for modeling temporal dependencies. However, existing methods for processing multi-scale features suffer from two major limitations: they often rely on naive downsampling operations for scale alignment, causing significant structural information loss, and they treat features from different layers equally, without fully exploiting the complementary strengths across hierarchical levels. To address these issues, we propose a novel Wavelet-Based Adaptive Multi-scale Fusion Network (WAM-Net), which consists of three key components: (1) a Wavelet-based Fusion Module (WFM) that achieves feature alignment through wavelet reconstruction, avoiding the structural degradation typically introduced by direct downsampling, (2) an Adaptive Feature Selection Module (AFSM) that dynamically selects and fuses two levels of features based on global information, enabling the network to leverage their complementary advantages, and (3) a Duration Context Encoder (DCE) that extracts temporal duration representations from the overall video length to guide global dependency modeling. Extensive experiments on Diving48, FineGym, and Kinetics-400 demonstrate that our approach consistently outperforms existing state-of-the-art methods.
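The idea of aligning scales through a wavelet transform instead of naive downsampling can be shown with a 2-D Haar decomposition, which halves resolution while keeping the detail subbands (a minimal sketch; the paper's wavelet basis and reconstruction-based fusion are not reproduced here):

```python
import torch

def haar_downsample(x):
    """Orthonormal 2-D Haar DWT of a feature map: returns (LL, LH, HL, HH),
    each at half resolution, so a fine-scale feature can be aligned with a
    coarser one without plain strided downsampling.  Minimal sketch, not the
    paper's wavelet fusion module.  x: (B, C, H, W) with even H and W."""
    a = x[:, :, 0::2, 0::2]   # top-left of each 2x2 block
    b = x[:, :, 0::2, 1::2]   # top-right
    c = x[:, :, 1::2, 0::2]   # bottom-left
    d = x[:, :, 1::2, 1::2]   # bottom-right
    ll = (a + b + c + d) / 2.0          # low-frequency approximation
    lh = (a - b + c - d) / 2.0          # horizontal detail
    hl = (a + b - c - d) / 2.0          # vertical detail
    hh = (a - b - c + d) / 2.0          # diagonal detail
    return ll, lh, hl, hh

x = torch.randn(1, 8, 32, 32)
ll, lh, hl, hh = haar_downsample(x)
print(ll.shape)   # torch.Size([1, 8, 16, 16])
```

Because the transform is invertible, the detail subbands can be carried along and used for reconstruction, which is what avoids the information loss of plain strided pooling.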
Citations: 0
Enhancing skin cancer classification with Soft Attention and genetic algorithm-optimized ensemble learning
IF 4.2 | CAS Tier 3, Computer Science | Q2 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2025-12-03 | DOI: 10.1016/j.imavis.2025.105848
Vibhav Ranjan, Kuldeep Chaurasia, Jagendra Singh
Skin cancer detection is a critical task in dermatology, where early diagnosis can significantly improve patient outcomes. In this work, we propose a novel approach for skin cancer classification that combines three deep learning models—InceptionResNetV2 with Soft Attention (SA), ResNet50V2 with SA, and DenseNet201—optimized using a Genetic Algorithm (GA) to find the best ensemble weights. The approach integrates several key innovations: Sigmoid Focal Cross-entropy Loss to address class imbalance, Mish activation for improved gradient flow, and Cosine Annealing learning rate scheduling for enhanced convergence. The GA-based optimization fine-tunes the ensemble weights to maximize classification performance, especially for challenging skin cancer types like melanoma. Experimental results on the HAM10000 dataset demonstrate the effectiveness of the proposed ensemble model, achieving superior accuracy and precision compared to individual models. This work offers a robust framework for skin cancer detection, combining state-of-the-art deep learning techniques with an optimization strategy.
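A toy genetic algorithm for the ensemble-weight search might look like the following, where candidate weight vectors are evaluated by the accuracy of the weighted-average prediction; the population size, crossover, and mutation settings are illustrative, not those of the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

def ga_ensemble_weights(probs_list, y_true, pop=30, gens=40, mut=0.1):
    """Tiny genetic algorithm searching for non-negative ensemble weights
    (normalised to sum to 1) that maximise the accuracy of the weighted-average
    prediction.  probs_list: list of (N, C) class-probability arrays, one per
    model.  Population size, crossover, and mutation are illustrative."""
    stacked = np.stack(probs_list)                       # (M, N, C)
    n_models = stacked.shape[0]

    def fitness(w):
        fused = np.tensordot(w / w.sum(), stacked, axes=1)   # (N, C)
        return (fused.argmax(1) == y_true).mean()

    population = rng.random((pop, n_models)) + 1e-6
    for _ in range(gens):
        scores = np.array([fitness(w) for w in population])
        parents = population[np.argsort(-scores)[:pop // 2]]      # elitism
        children = []
        for _ in range(pop - len(parents)):
            pa, pb = parents[rng.integers(len(parents), size=2)]
            child = np.where(rng.random(n_models) < 0.5, pa, pb)  # uniform crossover
            child = child + mut * rng.standard_normal(n_models)   # Gaussian mutation
            children.append(np.clip(child, 1e-6, None))
        population = np.vstack([parents, children])
    best = max(population, key=fitness)
    return best / best.sum()

# toy check: 3 "models", 200 samples, 7 classes (as in HAM10000)
y = rng.integers(0, 7, 200)
models = [np.eye(7)[y] * 0.6 + rng.random((200, 7)) * 0.4 for _ in range(3)]
print(ga_ensemble_weights(models, y))
```

In practice the probability arrays would come from the three trained networks on a validation split, and the returned weights would be fixed for test-time fusion.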
Citations: 0
On the relevance of patch-based extraction methods for monocular depth estimation
IF 4.2 | CAS Tier 3, Computer Science | Q2 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2025-12-03 | DOI: 10.1016/j.imavis.2025.105857
Pasquale Coscia, Antonio Fusillo, Angelo Genovese, Vincenzo Piuri, Fabio Scotti
Scene geometry estimation from images plays a key role in robotics, augmented reality, and autonomous systems. In particular, Monocular Depth Estimation (MDE) focuses on predicting depth from a single RGB image, avoiding the need for expensive sensors. State-of-the-art approaches use deep learning models for MDE while processing images as a whole, exploiting their spatial information sub-optimally. A recent research direction focuses on smaller image patches, as depth information varies across different regions of an image. This approach reduces model complexity and improves performance by capturing finer spatial details. From this perspective, we propose a novel warp patch-based extraction method that corrects perspective camera distortions, and employ it in tailored training and inference pipelines. Our experimental results show that our patch-based approach outperforms its full-image-trained counterpart and the classical crop patch-based extraction. With our technique, we obtain general performance enhancements over recent state-of-the-art models. Code is available at https://github.com/AntonioFusillo/PatchMDE.
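A simple patch-based inference loop (without the proposed warp-based perspective correction) shows how per-patch depth predictions can be blended back into a full-resolution map; `depth_fn` is a placeholder for any depth network:

```python
import numpy as np

def patchwise_depth(image, depth_fn, patch=128, stride=96):
    """Run a per-patch depth predictor over a regular grid and blend the
    overlapping predictions by averaging.  `depth_fn` is a placeholder for any
    single-image depth model; the warp-based perspective correction proposed
    in the paper is not reproduced here.  image: (H, W, 3) with H, W >= patch."""
    h, w = image.shape[:2]
    depth = np.zeros((h, w))
    weight = np.zeros((h, w))
    ys = list(range(0, h - patch + 1, stride))
    xs = list(range(0, w - patch + 1, stride))
    if ys[-1] != h - patch:
        ys.append(h - patch)        # make sure the bottom border is covered
    if xs[-1] != w - patch:
        xs.append(w - patch)        # make sure the right border is covered
    for y in ys:
        for x in xs:
            crop = image[y:y + patch, x:x + patch]
            depth[y:y + patch, x:x + patch] += depth_fn(crop)
            weight[y:y + patch, x:x + patch] += 1.0
    return depth / weight

fake_depth_net = lambda crop: crop.mean(axis=2)    # stand-in for a real network
img = np.random.rand(256, 320, 3)
print(patchwise_depth(img, fake_depth_net).shape)  # (256, 320)
```

The overlap averaging hides seams between patches; the paper's contribution lies in how the patches are extracted, not in this stitching step.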
Citations: 0
A robust and secure video recovery scheme with deep compressive sensing
IF 4.2 | CAS Tier 3, Computer Science | Q2 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2025-12-01 | DOI: 10.1016/j.imavis.2025.105853
Jagannath Sethi, Jaydeb Bhaumik, Ananda S. Chowdhury
In this paper, we propose a secure, high-quality video recovery scheme that can be useful for diverse applications such as telemedicine and cloud-based surveillance. Our solution consists of deep learning-based video Compressive Sensing (CS) followed by a strategy for encrypting the compressed video. We split a video into a number of Groups Of Pictures (GOPs), where each GOP consists of both keyframes and non-keyframes. The proposed video CS method uses a convolutional neural network (CNN) with a Structural Similarity Index Measure (SSIM) based loss function. Our recovery process has two stages. In the initial recovery stage, the CNN is employed to make efficient use of spatial redundancy. In the deep recovery stage, non-keyframes are compensated by utilizing both keyframes and neighboring non-keyframes. Keyframes use multilevel feature compensation, and neighboring non-keyframes use single-level feature compensation. Additionally, we propose an unpredictable and complex chaotic map with a broader chaotic range, termed the Sine Symbolic Chaotic Map (SSCM). For encrypting compressed features, we propose a secure encryption scheme consisting of four operations: Forward Diffusion, Substitution, Backward Diffusion, and XORing with an SSCM-based chaotic sequence. Through extensive experimentation, we establish the efficacy of our combined solution over i) several state-of-the-art image and video CS methods, and ii) a number of video encryption techniques.
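The diffusion-plus-XOR flavour of the encryption stage can be sketched as follows; a generic logistic-sine style map stands in for the paper's Sine Symbolic Chaotic Map, and only the forward-diffusion and XOR steps are shown:

```python
import numpy as np

def chaotic_keystream(length, x0=0.37, r=3.99):
    """Byte keystream from a chaotic iteration.  A generic logistic-sine style
    map stands in for the paper's Sine Symbolic Chaotic Map (SSCM), whose exact
    definition and parameters are given in the paper."""
    x = x0
    out = np.empty(length, dtype=np.uint8)
    for i in range(length):
        x = (r * x * (1.0 - x) + np.sin(np.pi * x)) % 1.0
        out[i] = int(x * 256) % 256
    return out

def encrypt_features(feature_bytes, key_x0=0.37):
    """Toy version of the diffusion + XOR stage applied to a compressed feature
    stream: forward additive diffusion followed by XOR with the chaotic
    keystream.  The substitution and backward-diffusion steps of the full
    scheme are omitted for brevity."""
    data = np.frombuffer(feature_bytes, dtype=np.uint8).astype(np.int64)
    diffused = (np.cumsum(data) % 256).astype(np.uint8)   # forward diffusion
    stream = chaotic_keystream(len(diffused), x0=key_x0)
    return (diffused ^ stream).tobytes()

cipher = encrypt_features(b"compressed measurement block")
print(cipher.hex())
```

Decryption simply reverses the two steps with the same key-seeded keystream, which is what makes a chaotic map with a broad, unpredictable range attractive for this role.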
Citations: 0