
Latest publications in Image and Vision Computing

LoGA-Attack: Local geometry-aware adversarial attack on 3D point clouds
IF 4.2 CAS Tier 3 (Computer Science) Q2 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE Pub Date: 2025-12-11 DOI: 10.1016/j.imavis.2025.105871
Jia Yuan, Jun Chen, Chongshou Li, Pedro Alonso, Xinke Li, Tianrui Li
Adversarial attacks on 3D point clouds are increasingly critical for safety-sensitive domains like autonomous driving. Most existing methods ignore local geometric structure, yielding perturbations that harm imperceptibility and geometric consistency. We introduce LoGA-Attack, a local geometry-aware adversarial attack that exploits topological and geometric cues to craft refined perturbations. A Neighborhood Centrality (NC) score partitions points into contour and flat point sets. Contour points receive gradient-based iterative updates to maximize attack strength, while flat points use an Optimal Neighborhood-based Attack (ONA) that projects gradients onto the most consistent local geometric direction. Experiments on ModelNet40 and ScanObjectNN show higher attack success with lower perceptual distortion, demonstrating superior performance and strong transferability. Our code is available at: https://github.com/yuanjiachn/LoGA-Attack.
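
To make the two-branch perturbation rule concrete, here is a minimal NumPy sketch of the idea described above. The abstract does not give the exact NC formula or how the "most consistent local geometric direction" is defined, so the centroid-offset score and PCA-based projection below are illustrative assumptions, not the authors' implementation.

```python
# Sketch only: NC is assumed to be the offset of a point from the centroid of
# its k nearest neighbors, and the "local direction" is assumed to be the
# dominant PCA direction of the neighborhood. Both are hypothetical stand-ins.
import numpy as np

def partition_by_neighborhood_centrality(points, k=16, quantile=0.7):
    """Split an (N, 3) point cloud into contour-like and flat-like subsets."""
    d2 = ((points[:, None, :] - points[None, :, :]) ** 2).sum(-1)
    knn_idx = np.argsort(d2, axis=1)[:, 1:k + 1]        # exclude the point itself
    centroids = points[knn_idx].mean(axis=1)             # (N, 3) neighborhood centroids
    nc = np.linalg.norm(points - centroids, axis=1)      # assumed NC score
    contour_mask = nc >= np.quantile(nc, quantile)       # high NC -> contour-like points
    return contour_mask, ~contour_mask

def project_gradient_to_local_direction(points, grads, flat_mask, k=16):
    """For flat points, keep only the gradient component along the dominant
    local direction (largest PCA eigenvector of the neighborhood)."""
    d2 = ((points[:, None, :] - points[None, :, :]) ** 2).sum(-1)
    knn_idx = np.argsort(d2, axis=1)[:, 1:k + 1]
    projected = grads.copy()
    for i in np.where(flat_mask)[0]:
        nbrs = points[knn_idx[i]] - points[knn_idx[i]].mean(0)
        _, _, vt = np.linalg.svd(nbrs, full_matrices=False)
        direction = vt[0]                                 # dominant local direction
        projected[i] = np.dot(grads[i], direction) * direction
    return projected

if __name__ == "__main__":
    pts = np.random.randn(256, 3).astype(np.float32)
    g = np.random.randn(256, 3).astype(np.float32)        # placeholder for loss gradients
    contour, flat = partition_by_neighborhood_centrality(pts)
    g_refined = project_gradient_to_local_direction(pts, g, flat)
    print(contour.sum(), "contour points,", flat.sum(), "flat points")
```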
Citations: 0
Gaussian landmarks tracking-based real-time splatting reconstruction model
IF 4.2 CAS Tier 3 (Computer Science) Q2 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE Pub Date: 2025-12-08 DOI: 10.1016/j.imavis.2025.105869
Donglin Zhu, Zhongli Wang, Xiaoyang Fan, Miao Chen, Jiuyu Chen
Real-time and high-quality scene reconstruction remains a critical challenge for robotics applications. 3D Gaussian Splatting (3DGS) demonstrates remarkable capabilities in scene rendering. However, its integration with SLAM systems confronts two critical limitations: (1) slow pose tracking caused by rendering full frames multiple times, and (2) susceptibility to environmental variations such as illumination changes and motion blur. To alleviate these issues, this paper proposes a Gaussian landmark-based real-time reconstruction framework, GLT-SLAM, which comprises a ray casting-driven tracking module, a multi-modal keyframe selector, and an incremental geometric–photometric mapping module. To avoid redundant rendering computations, the tracking module achieves efficient 3D-2D correspondence by encoding Gaussian landmark-emitted rays and fusing attention scores. Furthermore, to enhance the framework’s robustness against complex environmental conditions, the keyframe selector balances multiple influencing factors including image quality, tracking uncertainty, information entropy, and feature overlap ratios. Finally, to achieve a compact map representation, the mapping module adds only Gaussian primitives of points, lines, and planes, and performs global map optimization through joint photometric–geometric constraints. Experimental results on the Replica, TUM RGB-D, and BJTU datasets demonstrate that the proposed method achieves a real-time processing rate of over 30 Hz on a platform with an NVIDIA RTX 3090, demonstrating 19% higher efficiency than the fastest Photo-SLAM method while significantly outperforming other baseline methods in both localization and mapping accuracy. The source code will be available on GitHub.
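
As one concrete illustration, the multi-factor keyframe selection described above can be sketched as a weighted score over the listed factors. The abstract does not specify how the factors are balanced, so the linear weights, normalization, and threshold below are illustrative assumptions only.

```python
# Minimal sketch of multi-factor keyframe scoring; weights/threshold are placeholders.
from dataclasses import dataclass

@dataclass
class FrameStats:
    image_quality: float          # e.g. sharpness/blur score in [0, 1]
    tracking_uncertainty: float   # e.g. normalized pose covariance in [0, 1]
    information_entropy: float    # normalized image entropy in [0, 1]
    overlap_ratio: float          # feature overlap with the last keyframe in [0, 1]

def keyframe_score(s: FrameStats, w=(0.3, 0.3, 0.2, 0.2)) -> float:
    """Higher score -> better keyframe candidate."""
    return (w[0] * s.image_quality
            + w[1] * s.tracking_uncertainty     # uncertain frames need anchoring
            + w[2] * s.information_entropy
            + w[3] * (1.0 - s.overlap_ratio))   # low overlap -> new information

def select_keyframe(s: FrameStats, threshold: float = 0.55) -> bool:
    return keyframe_score(s) >= threshold

print(select_keyframe(FrameStats(0.8, 0.6, 0.7, 0.3)))  # -> True
```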
Citations: 0
A deformable registration framework for brain MR images based on a dual-channel fusion strategy using GMamba
IF 4.2 CAS Tier 3 (Computer Science) Q2 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE Pub Date: 2025-12-07 DOI: 10.1016/j.imavis.2025.105868
Liwei Deng, Songyu Chen, Xin Yang, Sijuan Huang, Jing Wang
Medical image registration has important applications in medical image analysis. Although deep learning-based registration methods are widely recognized, there is still room for performance improvement in existing algorithms due to the complex physiological structure of brain images. In this paper, we aim to propose a deformable medical image registration method that is highly accurate and capable of handling complex physiological structures. To this end, we propose DFMNet, a dual-channel fusion method based on GMamba, to achieve accurate brain MRI image registration. Compared with state-of-the-art networks like TransMorph, DFMNet has a dual-channel network structure with different fusion strategies. We propose the GMamba block to efficiently capture long-range dependencies in moving and fixed image features. Meanwhile, we propose a context extraction channel to enhance the texture structure of the image content. In addition, we design a weighted fusion block so that the features of the two channels can be fused efficiently. Extensive experiments on three public brain datasets demonstrate the effectiveness of DFMNet. The experimental results demonstrate that DFMNet outperforms multiple current state-of-the-art deformable registration methods in structural registration of brain images.
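
To illustrate what a "weighted fusion block" for two feature channels can look like, here is a small PyTorch sketch. The concrete block design in DFMNet is not given in the abstract; the channel-wise gating below (and the choice of 3D features for volumetric MRI) is an assumption made only for illustration.

```python
# Minimal sketch: a learnable gate blends features from two branches per channel.
import torch
import torch.nn as nn

class WeightedFusionBlock(nn.Module):
    def __init__(self, channels: int):
        super().__init__()
        # Predict per-channel fusion weights from the concatenated branches.
        self.gate = nn.Sequential(
            nn.AdaptiveAvgPool3d(1),
            nn.Conv3d(2 * channels, channels, kernel_size=1),
            nn.Sigmoid(),
        )

    def forward(self, feat_a: torch.Tensor, feat_b: torch.Tensor) -> torch.Tensor:
        # feat_a, feat_b: (B, C, D, H, W) features from the two channels.
        w = self.gate(torch.cat([feat_a, feat_b], dim=1))  # (B, C, 1, 1, 1)
        return w * feat_a + (1.0 - w) * feat_b

fusion = WeightedFusionBlock(channels=16)
a, b = torch.randn(1, 16, 8, 8, 8), torch.randn(1, 16, 8, 8, 8)
print(fusion(a, b).shape)  # torch.Size([1, 16, 8, 8, 8])
```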
Citations: 0
LCFusion: Infrared and visible image fusion network based on local contour enhancement
IF 4.2 CAS Tier 3 (Computer Science) Q2 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE Pub Date: 2025-12-03 DOI: 10.1016/j.imavis.2025.105856
Yitong Yang, Lei Zhu, Xinyang Yao, Hua Wang, Yang Pan, Bo Zhang
Infrared and visible light image fusion aims to generate integrated representations that synergistically preserve salient thermal targets in the infrared modality and high-resolution textural details in the visible light modality. However, existing methods face two core challenges: First, high-frequency noise in visible images, such as sensor noise and nonuniform illumination artifacts, is often highly coupled with effective textures. Traditional fusion paradigms readily amplify noise interference while enhancing details, leading to structural distortion and visual graininess in fusion results. Second, mainstream approaches predominantly rely on simple aggregation operations like feature stitching or linear weighting, lacking deep modeling of cross-modal semantic correlations. This prevents adaptive interaction and collaborative enhancement of complementary information between modalities, creating a significant trade-off between target saliency and detail preservation. To address these challenges, we propose a dual-branch fusion network based on local contour enhancement. Specifically, it distinguishes and enhances meaningful contour details in a learnable manner while suppressing meaningless noise, thereby purifying the detail information used for fusion at its source. Cross-attention weights are computed based on feature representations extracted from different modal branches, enabling a feature selection mechanism that facilitates dynamic cross-modal interaction between infrared and visible light information. We evaluate our method against 11 state-of-the-art deep learning-based fusion approaches across four benchmark datasets using both subjective assessments and objective metrics. The experimental results demonstrate superior performance on public datasets. Furthermore, YOLOv12-based detection tests reveal that our method achieves higher confidence scores and better overall detection performance compared to other fusion techniques.
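
To make the cross-modal attention idea concrete, here is a minimal single-head formulation in PyTorch: visible features form the queries and infrared features form the keys and values. The actual attention design in LCFusion is not specified in the abstract, so this layout (including the residual connection) is an illustrative assumption.

```python
# Sketch: cross-attention lets visible-light features query infrared features.
import torch
import torch.nn as nn

class CrossModalAttention(nn.Module):
    def __init__(self, channels: int):
        super().__init__()
        self.q = nn.Conv2d(channels, channels, 1)   # queries from visible features
        self.k = nn.Conv2d(channels, channels, 1)   # keys from infrared features
        self.v = nn.Conv2d(channels, channels, 1)   # values from infrared features
        self.scale = channels ** -0.5

    def forward(self, vis: torch.Tensor, ir: torch.Tensor) -> torch.Tensor:
        b, c, h, w = vis.shape
        q = self.q(vis).flatten(2).transpose(1, 2)       # (B, HW, C)
        k = self.k(ir).flatten(2)                         # (B, C, HW)
        v = self.v(ir).flatten(2).transpose(1, 2)         # (B, HW, C)
        attn = torch.softmax(q @ k * self.scale, dim=-1)  # (B, HW, HW)
        out = (attn @ v).transpose(1, 2).reshape(b, c, h, w)
        return out + vis                                  # residual keeps visible details

attn = CrossModalAttention(channels=8)
print(attn(torch.randn(1, 8, 16, 16), torch.randn(1, 8, 16, 16)).shape)
```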
Citations: 0
WAM-Net: Wavelet-Based Adaptive Multi-scale Fusion Network for fine-grained action recognition
IF 4.2 CAS Tier 3 (Computer Science) Q2 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE Pub Date: 2025-12-03 DOI: 10.1016/j.imavis.2025.105855
Jirui Di, Zhengping Hu, Hehao Zhang, Qiming Zhang, Zhe Sun
Fine-grained actions often lack scene prior information, making strong temporal modeling particularly important. Since these actions primarily rely on subtle and localized motion differences, single-scale features are often insufficient to capture their complexity. In contrast, multi-scale features not only capture fine-grained patterns but also contain rich rhythmic information, which is crucial for modeling temporal dependencies. However, existing methods for processing multi-scale features suffer from two major limitations: they often rely on naive downsampling operations for scale alignment, causing significant structural information loss, and they treat features from different layers equally, without fully exploiting the complementary strengths across hierarchical levels. To address these issues, we propose a novel Wavelet-Based Adaptive Multi-scale Fusion Network (WAM-Net), which consists of three key components: (1) a Wavelet-based Fusion Module (WFM) that achieves feature alignment through wavelet reconstruction, avoiding the structural degradation typically introduced by direct downsampling, (2) an Adaptive Feature Selection Module (AFSM) that dynamically selects and fuses two levels of features based on global information, enabling the network to leverage their complementary advantages, and (3) a Duration Context Encoder (DCE) that extracts temporal duration representations from the overall video length to guide global dependency modeling. Extensive experiments on Diving48, FineGym, and Kinetics-400 demonstrate that our approach consistently outperforms existing state-of-the-art methods.
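
The key contrast drawn above is between naive downsampling (which discards structure) and wavelet-based scale alignment (which keeps it). The sketch below shows a generic single-level Haar transform in PyTorch that halves spatial resolution losslessly; it is not WAM-Net's actual WFM, whose internals the abstract leaves open.

```python
# Sketch: a Haar DWT halves resolution while retaining high-frequency sub-bands,
# and its inverse reconstructs the input exactly (unlike strided pooling).
import torch

def haar_dwt(x: torch.Tensor):
    """x: (B, C, H, W) with even H, W -> (LL, LH, HL, HH), each (B, C, H/2, W/2)."""
    a = x[:, :, 0::2, 0::2]
    b = x[:, :, 0::2, 1::2]
    c = x[:, :, 1::2, 0::2]
    d = x[:, :, 1::2, 1::2]
    return (a + b + c + d) / 2, (a - b + c - d) / 2, (a + b - c - d) / 2, (a - b - c + d) / 2

def haar_idwt(ll, lh, hl, hh):
    """Exact inverse of haar_dwt, reconstructing the (B, C, H, W) input."""
    a = (ll + lh + hl + hh) / 2
    b = (ll - lh + hl - hh) / 2
    c = (ll + lh - hl - hh) / 2
    d = (ll - lh - hl + hh) / 2
    x = torch.zeros(ll.shape[0], ll.shape[1], ll.shape[2] * 2, ll.shape[3] * 2,
                    dtype=ll.dtype, device=ll.device)
    x[:, :, 0::2, 0::2] = a
    x[:, :, 0::2, 1::2] = b
    x[:, :, 1::2, 0::2] = c
    x[:, :, 1::2, 1::2] = d
    return x

feat = torch.randn(1, 4, 8, 8)
bands = haar_dwt(feat)
print(torch.allclose(haar_idwt(*bands), feat, atol=1e-6))  # True: lossless alignment
```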
Citations: 0
Enhancing skin cancer classification with Soft Attention and genetic algorithm-optimized ensemble learning
IF 4.2 CAS Tier 3 (Computer Science) Q2 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE Pub Date: 2025-12-03 DOI: 10.1016/j.imavis.2025.105848
Vibhav Ranjan, Kuldeep Chaurasia, Jagendra Singh
Skin cancer detection is a critical task in dermatology, where early diagnosis can significantly improve patient outcomes. In this work, we propose a novel approach for skin cancer classification that combines three deep learning models—InceptionResNetV2 with Soft Attention (SA), ResNet50V2 with SA, and DenseNet201—optimized using a Genetic Algorithm (GA) to find the best ensemble weights. The approach integrates several key innovations: Sigmoid Focal Cross-entropy Loss to address class imbalance, Mish activation for improved gradient flow, and Cosine Annealing learning rate scheduling for enhanced convergence. The GA-based optimization fine-tunes the ensemble weights to maximize classification performance, especially for challenging skin cancer types like melanoma. Experimental results on the HAM10000 dataset demonstrate the effectiveness of the proposed ensemble model, achieving superior accuracy and precision compared to individual models. This work offers a robust framework for skin cancer detection, combining state-of-the-art deep learning techniques with an optimization strategy.
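
A small NumPy sketch of the core optimization step, searching ensemble weights with a genetic algorithm, is given below. The population size, crossover/mutation scheme, and accuracy-based fitness are illustrative choices, not the paper's exact configuration; `probs_per_model` stands in for the validation-set softmax outputs of the three trained models.

```python
# Sketch: GA search over ensemble weights that maximizes validation accuracy.
import numpy as np

rng = np.random.default_rng(0)

def fitness(weights, probs_per_model, labels):
    """Validation accuracy of the weighted ensemble; probs_per_model is (M, N, K)."""
    w = weights / weights.sum()
    ensemble = np.tensordot(w, probs_per_model, axes=1)   # (N, K)
    return (ensemble.argmax(1) == labels).mean()

def ga_optimize(probs_per_model, labels, pop=30, gens=50, elite=5, sigma=0.1):
    m = probs_per_model.shape[0]
    population = rng.random((pop, m)) + 1e-6
    for _ in range(gens):
        scores = np.array([fitness(ind, probs_per_model, labels) for ind in population])
        parents = population[np.argsort(scores)[-elite:]]          # keep the best
        children = []
        while len(children) < pop - elite:
            pa, pb = parents[rng.integers(elite, size=2)]
            cut = rng.integers(1, m) if m > 1 else 0                # single-point crossover
            child = np.concatenate([pa[:cut], pb[cut:]])
            child = np.clip(child + rng.normal(0, sigma, m), 1e-6, None)  # mutation
            children.append(child)
        population = np.vstack([parents, children])
    best = population[np.argmax([fitness(i, probs_per_model, labels) for i in population])]
    return best / best.sum()

# Toy usage: 3 models, 200 validation samples, 7 classes (as in HAM10000).
probs = rng.dirichlet(np.ones(7), size=(3, 200))
labels = rng.integers(0, 7, size=200)
print(ga_optimize(probs, labels))
```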
Citations: 0
On the relevance of patch-based extraction methods for monocular depth estimation
IF 4.2 CAS Tier 3 (Computer Science) Q2 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE Pub Date: 2025-12-03 DOI: 10.1016/j.imavis.2025.105857
Pasquale Coscia, Antonio Fusillo, Angelo Genovese, Vincenzo Piuri, Fabio Scotti
Scene geometry estimation from images plays a key role in robotics, augmented reality, and autonomous systems. In particular, Monocular Depth Estimation (MDE) focuses on predicting depth using a single RGB image, avoiding the need for expensive sensors. State-of-the-art approaches use deep learning models for MDE while processing images as a whole, sub-optimally exploiting their spatial information. A recent research direction focuses on smaller image patches, as depth information varies across different regions of an image. This approach reduces model complexity and improves performance by capturing finer spatial details. From this perspective, we propose a novel warp patch-based extraction method that corrects perspective camera distortions, and employ it in tailored training and inference pipelines. Our experimental results show that our patch-based approach outperforms its full-image-trained counterpart and the classical crop patch-based extraction. With our technique, we obtain general performance enhancements over recent state-of-the-art models. Code is available at https://github.com/AntonioFusillo/PatchMDE.
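
For readers unfamiliar with the patch-based setting, the sketch below shows the generic crop-style pipeline (tile the image, predict depth per patch, reassemble) that the warp-based extraction improves upon. It does not implement the paper's perspective-correcting warp, and `depth_model` is a hypothetical stand-in for any per-patch depth predictor.

```python
# Sketch of baseline crop-style patch inference for monocular depth.
import numpy as np

def depth_model(patch: np.ndarray) -> np.ndarray:
    """Hypothetical stand-in: returns a per-pixel depth map for an RGB patch."""
    return patch.mean(axis=-1)          # fake "depth" just for the demo

def patchwise_depth(image: np.ndarray, patch: int = 128) -> np.ndarray:
    h, w, _ = image.shape
    depth = np.zeros((h, w), dtype=np.float32)
    for y in range(0, h, patch):
        for x in range(0, w, patch):
            tile = image[y:y + patch, x:x + patch]
            depth[y:y + patch, x:x + patch] = depth_model(tile)
    return depth

img = np.random.rand(256, 384, 3).astype(np.float32)
print(patchwise_depth(img).shape)       # (256, 384)
```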
Citations: 0
A robust and secure video recovery scheme with deep compressive sensing
IF 4.2 CAS Tier 3 (Computer Science) Q2 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE Pub Date: 2025-12-01 DOI: 10.1016/j.imavis.2025.105853
Jagannath Sethi, Jaydeb Bhaumik, Ananda S. Chowdhury
In this paper, we propose a secure, high-quality video recovery scheme that can be useful for diverse applications like telemedicine and cloud-based surveillance. Our solution consists of deep learning-based video Compressive Sensing (CS) followed by a strategy for encrypting the compressed video. We split a video into a number of Groups Of Pictures (GOPs), where each GOP consists of both keyframes and non-keyframes. The proposed video CS method uses a convolutional neural network (CNN) with a Structural Similarity Index Measure (SSIM) based loss function. Our recovery process has two stages. In the initial recovery stage, a CNN is employed to make efficient use of spatial redundancy. In the deep recovery stage, non-keyframes are compensated by utilizing both keyframes and neighboring non-keyframes. Keyframes use multilevel feature compensation, and neighboring non-keyframes use single-level feature compensation. Additionally, we propose an unpredictable and complex chaotic map, with a broader chaotic range, termed the Sine Symbolic Chaotic Map (SSCM). For encrypting compressed features, we suggest a secure encryption scheme consisting of four operations: Forward Diffusion, Substitution, Backward Diffusion, and XORing with an SSCM-based chaotic sequence. Through extensive experimentation, we establish the efficacy of our combined solution over i) several state-of-the-art image and video CS methods, and ii) a number of video encryption techniques.
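
To illustrate the flavor of the encryption stage, the sketch below combines a chaotic keystream XOR with a simple forward-diffusion pass. The sine map used here is a generic stand-in, not the paper's SSCM (whose formula the abstract omits), and the substitution and backward-diffusion operations are not shown.

```python
# Sketch: chaotic-keystream XOR plus forward diffusion over compressed features.
import numpy as np

def sine_map_keystream(x0: float, r: float, n: int) -> np.ndarray:
    """Generic sine chaotic map x_{k+1} = r * sin(pi * x_k), quantized to bytes."""
    xs = np.empty(n)
    x = x0
    for i in range(n):
        x = r * np.sin(np.pi * x)
        xs[i] = x
    return (np.abs(xs) * 1e6 % 256).astype(np.uint8)

def forward_diffuse(data: np.ndarray) -> np.ndarray:
    """Each output byte depends on the previous output byte (simple diffusion)."""
    out = data.copy()
    for i in range(1, out.size):
        out[i] = (int(out[i]) + int(out[i - 1])) % 256
    return out

def encrypt(features: np.ndarray, x0=0.37, r=0.99) -> np.ndarray:
    flat = features.astype(np.uint8).ravel()
    key = sine_map_keystream(x0, r, flat.size)
    return (forward_diffuse(flat) ^ key).reshape(features.shape)

compressed = np.random.randint(0, 256, size=(4, 8), dtype=np.uint8)
print(encrypt(compressed))
```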
Citations: 0
Distinct Polyp Generator Network for polyp segmentation
IF 4.2 CAS Tier 3 (Computer Science) Q2 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE Pub Date: 2025-11-29 DOI: 10.1016/j.imavis.2025.105847
Huan Wan, Jing Ai, Jing Liu, Xin Wei, Jinshan Zeng, Jianyi Wan
Accurate polyp segmentation from colonoscopy images is crucial for diagnosing and treating colorectal diseases. Although many automatic polyp segmentation models have been proposed and achieved good progress, they still suffer from under-segmentation or over-segmentation problems caused by the characteristics of colonoscopy images: blurred boundaries and widely varied polyp sizes. To address these problems, we propose a novel model, the Distinct Polyp Generator Network (DPG-Net), for polyp segmentation. In DPG-Net, a Feature Progressive Enhancement Module (FPEM) and a Dynamical Aggregation Module (DAM) are developed. The proposed FPEM is responsible for enhancing the polyps and polyp boundaries by jointly utilizing the boundary information and global prior information. Simultaneously, a DAM is developed to integrate all decoding features based on their own traits and detect polyps of various sizes. Finally, accurate segmentation results are obtained. Extensive experiments on five widely used datasets demonstrate that the proposed DPG-Net model is superior to the state-of-the-art models. To evaluate the cross-domain generalization ability, we adopt the proposed DPG-Net for the skin lesion segmentation task. Again, experimental results show that our DPG-Net achieves advanced performance in this task, which verifies the strong generalizability of DPG-Net.
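
As a rough illustration of aggregating decoder features from several levels, the PyTorch sketch below upsamples every level to the finest resolution and blends them with learned weights before a segmentation head. The real DAM design is not detailed in the abstract, so this softmax-weighted sum is only an assumed simplification.

```python
# Sketch: learned softmax weights over upsampled multi-level decoder features.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SimpleAggregator(nn.Module):
    def __init__(self, num_levels: int, channels: int):
        super().__init__()
        self.level_logits = nn.Parameter(torch.zeros(num_levels))  # one weight per level
        self.head = nn.Conv2d(channels, 1, kernel_size=1)           # binary polyp mask

    def forward(self, feats):
        target = feats[0].shape[-2:]                                 # finest resolution
        weights = torch.softmax(self.level_logits, dim=0)
        fused = sum(w * F.interpolate(f, size=target, mode="bilinear", align_corners=False)
                    for w, f in zip(weights, feats))
        return torch.sigmoid(self.head(fused))

agg = SimpleAggregator(num_levels=3, channels=32)
feats = [torch.randn(1, 32, 64, 64), torch.randn(1, 32, 32, 32), torch.randn(1, 32, 16, 16)]
print(agg(feats).shape)  # torch.Size([1, 1, 64, 64])
```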
Citations: 0
PCNet3D++: A pillar-based cascaded 3D object detection model with an enhanced 2D backbone
IF 4.2 CAS Tier 3 (Computer Science) Q2 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE Pub Date: 2025-11-29 DOI: 10.1016/j.imavis.2025.105854
Thurimerla Prasanth, Ram Prasad Padhy, B. Sivaselvan
Autonomous Vehicles (AVs) depend on sophisticated perception systems to serve as the vital component of intelligent transportation to ensure secure and smooth navigation. Perception is an essential component of AVs and enables real-time analysis and understanding of the environment for effective decision-making. 3D object detection (3D-OD) is crucial among perception tasks as it accurately determines the 3D geometry and spatial positioning of surrounding objects. The commonly used modalities for 3D-OD are camera, LiDAR, and sensor fusion. In this work, we propose a LiDAR-based 3D-OD approach using point cloud data. The proposed model achieves superior performance while maintaining computational efficiency. This approach utilizes pillar-based LiDAR processing and employs only 2D convolutions, which keeps the model pipeline simple and efficient. We propose a Cascaded Convolutional Backbone (CCB) integrated with 1 × 1 convolutions to improve detection accuracy. We combine the fast pillar-based encoding with our lightweight backbone. The proposed model reduces complexity to make it well-suited for real-time navigation of an AV. We evaluated our model on the official KITTI test server. The model achieves decent results on the 3D and Bird’s Eye View (BEV) detection benchmarks for the car and cyclist classes. The results of our proposed model are featured on the official KITTI leaderboard.
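
The reason pillar-based processing allows a 2D-convolution-only pipeline is that points are grouped into vertical columns on an x-y grid and scattered into a BEV pseudo-image. The NumPy sketch below shows that generic idea; the grid extents, resolution, and the max-pooling pillar encoder are illustrative assumptions rather than PCNet3D++'s actual encoder.

```python
# Sketch: scatter per-pillar features into a BEV pseudo-image for a 2D backbone.
import numpy as np

def pillarize(points, x_range=(0, 70.4), y_range=(-40, 40), resolution=0.16, feat_dim=4):
    """points: (N, feat_dim) with columns [x, y, z, intensity] -> BEV pseudo-image."""
    nx = int((x_range[1] - x_range[0]) / resolution)
    ny = int((y_range[1] - y_range[0]) / resolution)
    bev = np.zeros((feat_dim, ny, nx), dtype=np.float32)
    xi = ((points[:, 0] - x_range[0]) / resolution).astype(int)
    yi = ((points[:, 1] - y_range[0]) / resolution).astype(int)
    valid = (xi >= 0) & (xi < nx) & (yi >= 0) & (yi < ny)
    for p, cx, cy in zip(points[valid], xi[valid], yi[valid]):
        # Max-pool point features within each pillar (a simple stand-in encoder).
        bev[:, cy, cx] = np.maximum(bev[:, cy, cx], p[:feat_dim])
    return bev

pts = np.random.rand(1000, 4).astype(np.float32) * [70, 80, 3, 1] + [0, -40, -1, 0]
print(pillarize(pts).shape)  # (4, 500, 440)
```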
Citations: 0