
Image and Vision Computing: Latest Publications

A deformable registration framework for brain MR images based on a dual-channel fusion strategy using GMamba
IF 4.2, CAS Tier 3 (Computer Science), Q2 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE. Pub Date: 2025-12-07. DOI: 10.1016/j.imavis.2025.105868
Liwei Deng, Songyu Chen, Xin Yang, Sijuan Huang, Jing Wang
Medical image registration has important applications in medical image analysis. Although deep learning-based registration methods are widely recognized, there is still room for performance improvement in existing algorithms due to the complex physiological structure of brain images. In this paper, we aim to propose a deformable medical image registration method that is highly accurate and capable of handling complex physiological structures. To this end, we propose DFMNet, a dual-channel fusion method based on GMamba, to achieve accurate brain MRI registration. Compared with state-of-the-art networks such as TransMorph, DFMNet has a dual-channel network structure with different fusion strategies. We propose the GMamba block to efficiently capture long-range dependencies in moving and fixed image features. Meanwhile, we propose a context extraction channel to enhance the texture structure of the image content. In addition, we design a weighted fusion block so that the features of the two channels can be fused efficiently. Extensive experiments on three public brain datasets demonstrate the effectiveness of DFMNet. The results show that DFMNet outperforms multiple current state-of-the-art deformable registration methods in structural registration of brain images.
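To make the fusion idea concrete, here is a minimal PyTorch sketch of a weighted fusion block that merges two feature streams with a learned spatial gate. The gating design, the 3D feature shapes, and the channel roles named in the comments are illustrative assumptions, not DFMNet's actual block.

```python
import torch
import torch.nn as nn

class WeightedFusionBlock(nn.Module):
    """Fuse two feature volumes with a learned, spatially varying weight map."""

    def __init__(self, channels: int):
        super().__init__()
        # Predict a per-voxel weight in [0, 1] from the concatenated features.
        self.gate = nn.Sequential(
            nn.Conv3d(2 * channels, channels, kernel_size=3, padding=1),
            nn.Conv3d(channels, 1, kernel_size=1),
            nn.Sigmoid(),
        )

    def forward(self, feat_a: torch.Tensor, feat_b: torch.Tensor) -> torch.Tensor:
        # feat_a: hypothetical features from the long-range (GMamba) channel
        # feat_b: hypothetical features from the context-extraction channel
        w = self.gate(torch.cat([feat_a, feat_b], dim=1))
        return w * feat_a + (1.0 - w) * feat_b

if __name__ == "__main__":
    block = WeightedFusionBlock(channels=16)
    a = torch.randn(1, 16, 32, 32, 32)  # toy 3D brain-MRI feature volume
    b = torch.randn(1, 16, 32, 32, 32)
    print(block(a, b).shape)  # torch.Size([1, 16, 32, 32, 32])
```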
Citations: 0
LCFusion: Infrared and visible image fusion network based on local contour enhancement
IF 4.2, CAS Tier 3 (Computer Science), Q2 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE. Pub Date: 2025-12-03. DOI: 10.1016/j.imavis.2025.105856
Yitong Yang, Lei Zhu, Xinyang Yao, Hua Wang, Yang Pan, Bo Zhang
Infrared and visible light image fusion aims to generate integrated representations that synergistically preserve salient thermal targets in the infrared modality and high-resolution textural details in the visible light modality. However, existing methods face two core challenges: First, high-frequency noise in visible images, such as sensor noise and nonuniform illumination artifacts, is often highly coupled with effective textures. Traditional fusion paradigms readily amplify noise interference while enhancing details, leading to structural distortion and visual graininess in fusion results. Second, mainstream approaches predominantly rely on simple aggregation operations like feature stitching or linear weighting, lacking deep modeling of cross-modal semantic correlations. This prevents adaptive interaction and collaborative enhancement of complementary information between modalities, creating a significant trade-off between target saliency and detail preservation. To address these challenges, we propose a dual-branch fusion network based on local contour enhancement. Specifically, it distinguishes and enhances meaningful contour details in a learnable manner while suppressing meaningless noise, thereby purifying the detail information used for fusion at its source. Cross-attention weights are computed based on feature representations extracted from different modal branches, enabling a feature selection mechanism that facilitates dynamic cross-modal interaction between infrared and visible light information. We evaluate our method against 11 state-of-the-art deep learning-based fusion approaches across four benchmark datasets using both subjective assessments and objective metrics. The experimental results demonstrate superior performance on public datasets. Furthermore, YOLOv12-based detection tests reveal that our method achieves higher confidence scores and better overall detection performance compared to other fusion techniques.
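As a rough illustration of the dynamic cross-modal interaction described above, the PyTorch sketch below lets visible-light feature tokens attend to infrared tokens with standard multi-head cross-attention. The token layout, feature dimension, and use of nn.MultiheadAttention are assumptions for illustration rather than LCFusion's actual design.

```python
import torch
import torch.nn as nn

class CrossModalAttention(nn.Module):
    """Visible-light tokens query infrared tokens; a residual keeps the original stream."""

    def __init__(self, dim: int = 64, heads: int = 4):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)

    def forward(self, vis: torch.Tensor, ir: torch.Tensor) -> torch.Tensor:
        # vis, ir: (batch, tokens, dim) flattened spatial feature maps
        fused, _ = self.attn(query=vis, key=ir, value=ir)
        return self.norm(vis + fused)  # residual fusion of the two modalities

if __name__ == "__main__":
    vis = torch.randn(2, 256, 64)  # 16x16 visible-light feature map, flattened
    ir = torch.randn(2, 256, 64)   # matching infrared feature map
    print(CrossModalAttention()(vis, ir).shape)  # torch.Size([2, 256, 64])
```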
Citations: 0
WAM-Net: Wavelet-Based Adaptive Multi-scale Fusion Network for fine-grained action recognition
IF 4.2, CAS Tier 3 (Computer Science), Q2 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE. Pub Date: 2025-12-03. DOI: 10.1016/j.imavis.2025.105855
Jirui Di, Zhengping Hu, Hehao Zhang, Qiming Zhang, Zhe Sun
Fine-grained actions often lack scene prior information, making strong temporal modeling particularly important. Since these actions primarily rely on subtle and localized motion differences, single-scale features are often insufficient to capture their complexity. In contrast, multi-scale features not only capture fine-grained patterns but also contain rich rhythmic information, which is crucial for modeling temporal dependencies. However, existing methods for processing multi-scale features suffer from two major limitations: they often rely on naive downsampling operations for scale alignment, causing significant structural information loss, and they treat features from different layers equally, without fully exploiting the complementary strengths across hierarchical levels. To address these issues, we propose a novel Wavelet-Based Adaptive Multi-scale Fusion Network (WAM-Net), which consists of three key components: (1) a Wavelet-based Fusion Module (WFM) that achieves feature alignment through wavelet reconstruction, avoiding the structural degradation typically introduced by direct downsampling, (2) an Adaptive Feature Selection Module (AFSM) that dynamically selects and fuses two levels of features based on global information, enabling the network to leverage their complementary advantages, and (3) a Duration Context Encoder (DCE) that extracts temporal duration representations from the overall video length to guide global dependency modeling. Extensive experiments on Diving48, FineGym, and Kinetics-400 demonstrate that our approach consistently outperforms existing state-of-the-art methods.
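The sketch below illustrates the general idea behind lossless wavelet-based downsampling that the Wavelet-based Fusion Module relies on: a single-level Haar transform halves spatial resolution while keeping all subbands on the channel axis, unlike naive pooling. The Haar basis and single decomposition level are assumptions; WAM-Net's exact transform and reconstruction path are not shown.

```python
import torch

def haar_dwt_2d(x: torch.Tensor) -> torch.Tensor:
    """Split (B, C, H, W) into 4 Haar subbands stacked on the channel axis,
    halving spatial size without discarding information."""
    a = x[:, :, 0::2, 0::2]
    b = x[:, :, 0::2, 1::2]
    c = x[:, :, 1::2, 0::2]
    d = x[:, :, 1::2, 1::2]
    ll = (a + b + c + d) / 2  # low-frequency approximation
    lh = (a + b - c - d) / 2  # horizontal detail
    hl = (a - b + c - d) / 2  # vertical detail
    hh = (a - b - c + d) / 2  # diagonal detail
    return torch.cat([ll, lh, hl, hh], dim=1)

if __name__ == "__main__":
    feat = torch.randn(1, 16, 56, 56)  # one frame's feature map
    print(haar_dwt_2d(feat).shape)     # torch.Size([1, 64, 28, 28])
```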
Citations: 0
Enhancing skin cancer classification with Soft Attention and genetic algorithm-optimized ensemble learning
IF 4.2, CAS Tier 3 (Computer Science), Q2 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE. Pub Date: 2025-12-03. DOI: 10.1016/j.imavis.2025.105848
Vibhav Ranjan, Kuldeep Chaurasia, Jagendra Singh
Skin cancer detection is a critical task in dermatology, where early diagnosis can significantly improve patient outcomes. In this work, we propose a novel approach for skin cancer classification that combines three deep learning models—InceptionResNetV2 with Soft Attention (SA), ResNet50V2 with SA, and DenseNet201—optimized using a Genetic Algorithm (GA) to find the best ensemble weights. The approach integrates several key innovations: Sigmoid Focal Cross-entropy Loss to address class imbalance, Mish activation for improved gradient flow, and Cosine Annealing learning rate scheduling for enhanced convergence. The GA-based optimization fine-tunes the ensemble weights to maximize classification performance, especially for challenging skin cancer types like melanoma. Experimental results on the HAM10000 dataset demonstrate the effectiveness of the proposed ensemble model, achieving superior accuracy and precision compared to individual models. This work offers a robust framework for skin cancer detection, combining state-of-the-art deep learning techniques with an optimization strategy.
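A minimal sketch of genetic-algorithm weight search for an ensemble, assuming each base model's validation-set class probabilities are already available as NumPy arrays. Population size, Gaussian mutation, and accuracy as the fitness function are illustrative choices, not the paper's exact settings.

```python
import numpy as np

rng = np.random.default_rng(0)

def fitness(w, probs, labels):
    """Validation accuracy of a convex combination of per-model probabilities."""
    w = np.abs(w) / (np.abs(w).sum() + 1e-12)
    ensemble = sum(wi * p for wi, p in zip(w, probs))
    return (ensemble.argmax(axis=1) == labels).mean()

def ga_search(probs, labels, pop=30, gens=50):
    n_models = len(probs)
    population = rng.random((pop, n_models))
    for _ in range(gens):
        scores = np.array([fitness(ind, probs, labels) for ind in population])
        parents = population[np.argsort(scores)[-pop // 2:]]      # keep best half
        children = parents[rng.integers(len(parents), size=pop - len(parents))]
        children = children + rng.normal(0, 0.1, children.shape)  # Gaussian mutation
        population = np.vstack([parents, children])
    best = population[np.argmax([fitness(ind, probs, labels) for ind in population])]
    return np.abs(best) / np.abs(best).sum()

if __name__ == "__main__":
    labels = rng.integers(0, 7, 200)                            # toy 7-class labels (HAM10000 has 7 classes)
    probs = [rng.dirichlet(np.ones(7), 200) for _ in range(3)]  # 3 mock base models
    print(ga_search(probs, labels))
```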
Citations: 0
On the relevance of patch-based extraction methods for monocular depth estimation
IF 4.2, CAS Tier 3 (Computer Science), Q2 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE. Pub Date: 2025-12-03. DOI: 10.1016/j.imavis.2025.105857
Pasquale Coscia, Antonio Fusillo, Angelo Genovese, Vincenzo Piuri, Fabio Scotti
Scene geometry estimation from images plays a key role in robotics, augmented reality, and autonomous systems. In particular, Monocular Depth Estimation (MDE) focuses on predicting depth using a single RGB image, avoiding the need for expensive sensors. State-of-the-art approaches use deep learning models for MDE while processing images as a whole, exploiting their spatial information sub-optimally. A recent research direction focuses on smaller image patches, as depth information varies across different regions of an image. This approach reduces model complexity and improves performance by capturing finer spatial details. From this perspective, we propose a novel warp patch-based extraction method which corrects perspective camera distortions, and employ it in tailored training and inference pipelines. Our experimental results show that our patch-based approach outperforms its full-image-trained counterpart and the classical crop patch-based extraction. With our technique, we obtain general performance enhancements over recent state-of-the-art models. Code is available at https://github.com/AntonioFusillo/PatchMDE.
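For context, the sketch below shows a plain crop-based patch pipeline: split an image into overlapping patches with unfold, run a per-patch depth model, and stitch the predictions back with overlap averaging via fold. The proposed warp-based extraction additionally corrects perspective distortion per patch, which is not reproduced here; the patch size, stride, and stand-in depth model are assumptions.

```python
import torch
import torch.nn.functional as F

def patchwise_depth(image: torch.Tensor, model, patch: int = 128, stride: int = 64) -> torch.Tensor:
    """Run a single-channel depth model patch by patch and average overlapping predictions."""
    b, c, h, w = image.shape
    cols = F.unfold(image, kernel_size=patch, stride=stride)        # (B, C*patch*patch, L)
    n = cols.shape[-1]
    patches = cols.transpose(1, 2).reshape(b * n, c, patch, patch)  # individual patches
    depth = model(patches).reshape(b, n, patch * patch).transpose(1, 2)
    summed = F.fold(depth, output_size=(h, w), kernel_size=patch, stride=stride)
    count = F.fold(torch.ones_like(depth), output_size=(h, w), kernel_size=patch, stride=stride)
    return summed / count  # divide by per-pixel overlap count

if __name__ == "__main__":
    img = torch.randn(1, 3, 256, 256)
    toy_model = lambda p: p.mean(dim=1, keepdim=True)  # stand-in for a per-patch depth network
    print(patchwise_depth(img, toy_model).shape)       # torch.Size([1, 1, 256, 256])
```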
Citations: 0
A robust and secure video recovery scheme with deep compressive sensing
IF 4.2, CAS Tier 3 (Computer Science), Q2 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE. Pub Date: 2025-12-01. DOI: 10.1016/j.imavis.2025.105853
Jagannath Sethi, Jaydeb Bhaumik, Ananda S. Chowdhury
In this paper, we propose a secure, high-quality video recovery scheme that can be useful for diverse applications such as telemedicine and cloud-based surveillance. Our solution consists of deep learning-based video Compressive Sensing (CS) followed by a strategy for encrypting the compressed video. We split a video into a number of Groups Of Pictures (GOPs), where each GOP consists of both keyframes and non-keyframes. The proposed video CS method uses a convolutional neural network (CNN) with a Structural Similarity Index Measure (SSIM) based loss function. Our recovery process has two stages. In the initial recovery stage, CNN is employed to make efficient use of spatial redundancy. In the deep recovery stage, non-keyframes are compensated by utilizing both keyframes and neighboring non-keyframes. Keyframes use multilevel feature compensation, and neighboring non-keyframes use single-level feature compensation. Additionally, we propose an unpredictable and complex chaotic map with a broader chaotic range, termed the Sine Symbolic Chaotic Map (SSCM). For encrypting compressed features, we suggest a secure encryption scheme consisting of four operations: Forward Diffusion, Substitution, Backward Diffusion, and XORing with an SSCM-based chaotic sequence. Through extensive experimentation, we establish the efficacy of our combined solution over i) several state-of-the-art image and video CS methods, and ii) a number of video encryption techniques.
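To illustrate the keystream-XOR stage, the sketch below derives a byte keystream from the classical sine map, used here only as a stand-in because the SSCM formula is not reproduced; the forward/backward diffusion and substitution stages are also omitted, and the key values and byte quantization are illustrative.

```python
import numpy as np

def sine_map_keystream(x0: float, r: float, length: int) -> np.ndarray:
    """Iterate the classical sine map x_{n+1} = r * sin(pi * x_n) and quantize each state to a byte."""
    x = x0
    out = np.empty(length, dtype=np.uint8)
    for i in range(length):
        x = r * np.sin(np.pi * x)
        out[i] = int(x * 255) & 0xFF
    return out

def xor_encrypt(data: np.ndarray, x0: float = 0.37, r: float = 0.99) -> np.ndarray:
    """XOR the flattened data with the keystream; XOR is its own inverse, so the same call decrypts."""
    ks = sine_map_keystream(x0, r, data.size)
    return (data.reshape(-1) ^ ks).reshape(data.shape)

if __name__ == "__main__":
    features = np.random.randint(0, 256, size=(4, 4), dtype=np.uint8)  # toy compressed features
    enc = xor_encrypt(features)
    dec = xor_encrypt(enc)  # decrypt with the same key (x0, r)
    assert np.array_equal(dec, features)
    print(enc)
```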
Citations: 0
Distinct Polyp Generator Network for polyp segmentation
IF 4.2, CAS Tier 3 (Computer Science), Q2 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE. Pub Date: 2025-11-29. DOI: 10.1016/j.imavis.2025.105847
Huan Wan, Jing Ai, Jing Liu, Xin Wei, Jinshan Zeng, Jianyi Wan
Accurate polyp segmentation from the colonoscopy images is crucial for diagnosing and treating colorectal diseases. Although many automatic polyp segmentation models have been proposed and achieved good progress, they still suffer from under-segmentation or over-segmentation problems caused by the characteristics of colonoscopy images: blurred boundaries and widely varied polyp sizes. To address these problems, we propose a novel model, the Distinct Polyp Generator Network (DPG-Net), for polyp segmentation. In DPG-Net, a Feature Progressive Enhancement Module (FPEM) and a Dynamical Aggregation Module (DAM) are developed. The proposed FPEM is responsible for enhancing the polyps and polyp boundaries by jointly utilizing the boundary information and global prior information. Simultaneously, a DAM is developed to integrate all decoding features based on their own traits and detect polyps with various sizes. Finally, accurate segmentation results are obtained. Extensive experiments on five widely used datasets demonstrate that the proposed DPG-Net model is superior to the state-of-the-art models. To evaluate the cross-domain generalization ability, we adopt the proposed DPG-Net for the skin lesion segmentation task. Again, experimental results show that our DPG-Net achieves advanced performance in this task, which verifies the strong generalizability of DPG-Net.
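A minimal sketch of aggregating multi-level decoder features with learned per-level weights, loosely in the spirit of the Dynamical Aggregation Module; the softmax-weighted sum after bilinear upsampling and the 1x1 prediction head are illustrative assumptions, not DPG-Net's actual module.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class LevelAggregator(nn.Module):
    """Fuse decoder features from several levels with learned, softmax-normalized weights."""

    def __init__(self, channels: int, num_levels: int):
        super().__init__()
        self.level_weights = nn.Parameter(torch.zeros(num_levels))
        self.head = nn.Conv2d(channels, 1, kernel_size=1)  # binary polyp-mask logits

    def forward(self, feats: list) -> torch.Tensor:
        size = feats[0].shape[-2:]  # align every level to the finest resolution
        ups = [F.interpolate(f, size=size, mode="bilinear", align_corners=False) for f in feats]
        w = torch.softmax(self.level_weights, dim=0)
        fused = sum(wi * f for wi, f in zip(w, ups))
        return self.head(fused)

if __name__ == "__main__":
    feats = [torch.randn(1, 32, 88, 88), torch.randn(1, 32, 44, 44), torch.randn(1, 32, 22, 22)]
    print(LevelAggregator(32, len(feats))(feats).shape)  # torch.Size([1, 1, 88, 88])
```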
Citations: 0
PCNet3D++: A pillar-based cascaded 3D object detection model with an enhanced 2D backbone
IF 4.2, CAS Tier 3 (Computer Science), Q2 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE. Pub Date: 2025-11-29. DOI: 10.1016/j.imavis.2025.105854
Thurimerla Prasanth, Ram Prasad Padhy, B. Sivaselvan
Autonomous Vehicles (AVs) depend on sophisticated perception systems, a vital component of intelligent transportation, to ensure secure and smooth navigation. Perception is an essential component of AVs and enables real-time analysis and understanding of the environment for effective decision-making. 3D object detection (3D-OD) is crucial among perception tasks as it accurately determines the 3D geometry and spatial positioning of surrounding objects. The commonly used modalities for 3D-OD are camera, LiDAR, and sensor fusion. In this work, we propose a LiDAR-based 3D-OD approach using point cloud data. The proposed model achieves superior performance while maintaining computational efficiency. This approach utilizes Pillar-based LiDAR processing and uses only 2D convolutions, which makes the model pipeline simpler and more efficient. We propose a Cascaded Convolutional Backbone (CCB) integrated with 1 × 1 convolutions to improve detection accuracy. We combined the fast Pillar-based encoding with our lightweight backbone. The proposed model reduces complexity to make it well-suited for real-time navigation of an AV. We evaluated our model on the official KITTI test server. The model results are decent in the 3D and Bird’s Eye View (BEV) detection benchmarks for the car and cyclist classes. The results of our proposed model are featured on the official KITTI leaderboard.
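The sketch below shows the basic pillarization step that makes a purely 2D backbone possible: LiDAR points are max-scattered into a bird's-eye-view grid, producing a pseudo-image a 2D CNN can consume. The grid extent, resolution, and single height feature are illustrative assumptions, not PCNet3D++'s encoder.

```python
import torch

def pillarize(points: torch.Tensor, x_range=(0.0, 69.12), y_range=(-39.68, 39.68),
              res: float = 0.16, grid=(432, 496)) -> torch.Tensor:
    """points: (N, 4) columns = x, y, z, intensity. Returns a (1, 1, H, W) BEV map
    holding the max point height per pillar (requires PyTorch >= 1.12 for scatter_reduce_)."""
    gx = ((points[:, 0] - x_range[0]) / res).long()
    gy = ((points[:, 1] - y_range[0]) / res).long()
    keep = (gx >= 0) & (gx < grid[0]) & (gy >= 0) & (gy < grid[1])
    gx, gy, z = gx[keep], gy[keep], points[keep, 2]
    bev = torch.full((grid[1], grid[0]), -1e9)
    flat = gy * grid[0] + gx                      # linear pillar index
    bev.view(-1).scatter_reduce_(0, flat, z, reduce="amax")
    bev[bev < -1e8] = 0.0                         # empty pillars
    return bev.unsqueeze(0).unsqueeze(0)

if __name__ == "__main__":
    # Toy point cloud roughly spanning the configured x/y/z ranges.
    pts = torch.rand(10000, 4) * torch.tensor([69.12, 79.36, 3.0, 1.0]) + torch.tensor([0.0, -39.68, -1.0, 0.0])
    print(pillarize(pts).shape)  # torch.Size([1, 1, 496, 432])
```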
Citations: 0
Combining short-term and long-term memory for robust visual tracking
IF 4.2, CAS Tier 3 (Computer Science), Q2 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE. Pub Date: 2025-11-29. DOI: 10.1016/j.imavis.2025.105850
Zifan Rui, Xiaoxiao Wang, Yiteng Yang, Guang Han
In visual object tracking, addressing challenges such as target appearance deformation and occlusion has attracted increasing attention. To this end, this paper proposes CSLMTrack, a multiple memory tracking model that more comprehensively reflects the human memory mechanism. It contains short-term and long-term memory modules, as well as a novel feed-forward network, TFFN, for temporal information aggregation. A dynamic memory update strategy covering memory, information transfer, recall, and forgetting processes is also designed, which effectively avoids memory explosion while integrating memory elements into the tracking network. Extensive experiments conducted on multiple challenging benchmarks demonstrate that CSLMTrack achieves impressive results, reaching state-of-the-art-level performance against leading trackers.
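A minimal sketch of the short-/long-term memory bookkeeping described above: a small FIFO short-term store, periodic promotion of templates into long-term memory, and forgetting of the least confident entry once capacity is reached. Capacities, the promotion interval, and the confidence scores are illustrative assumptions, not CSLMTrack's actual update rule.

```python
from collections import deque

class DualMemory:
    """Short-term FIFO memory plus a capacity-limited long-term memory with forgetting."""

    def __init__(self, short_cap: int = 5, long_cap: int = 10, promote_every: int = 20):
        self.short = deque(maxlen=short_cap)  # recent templates, overwritten automatically
        self.long = []                        # (confidence, template) pairs
        self.long_cap = long_cap
        self.promote_every = promote_every

    def update(self, frame_idx: int, template, confidence: float) -> None:
        self.short.append(template)
        if frame_idx % self.promote_every == 0:          # information transfer into long-term memory
            self.long.append((confidence, template))
            if len(self.long) > self.long_cap:           # forgetting: drop the least confident entry
                self.long.remove(min(self.long, key=lambda t: t[0]))

    def recall(self):
        # Candidate templates the tracker can match against the current search region.
        return list(self.short) + [t for _, t in self.long]

if __name__ == "__main__":
    mem = DualMemory()
    for i in range(1, 101):
        mem.update(i, template=f"feat_{i}", confidence=(i % 7) / 7.0)
    print(len(mem.recall()))  # 5 short-term entries + at most 10 long-term entries
```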
Citations: 0
Preserving instance-level characteristics for multi-instance generation
IF 4.2, CAS Tier 3 (Computer Science), Q2 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE. Pub Date: 2025-11-28. DOI: 10.1016/j.imavis.2025.105851
Jaehak Ryu, Sungwon Moon, Donghyeon Cho
Recently, there have been efforts to explore instance-level control in diffusion models, where multiple instances are generated independently and then integrated into a single scene. However, several issues arise when instances are closely positioned or overlapping. First, independently generated instances frequently differ in style and lack coherence, leading to changes in their attributes as they influence each other when merged. Second, instances often merge with one another or become absorbed into others. To tackle these challenges, we propose a local latent refinement (LLR) that enforces each local latent to meet its conditions and remain distinct from others. We also propose a local latent injection (LLI) method that gradually integrates local latents during global latent generation for smoother fusion. Also, we find that the variance of latents changes significantly after instance fusion, which greatly impacts the quality of the generated images. To remedy this, we apply an instance normalization layer to regulate the variance of the fused latents, thereby producing high-quality images. Extensive experiments demonstrate that our approach achieves both high fidelity in instance layout and superior image quality, even in cases of high overlap among instances.
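As a rough illustration of gradual local-latent injection, the sketch below blends per-instance latents into the global latent with binary masks and a blending weight that decays over the denoising steps; the linear decay schedule and mask-based blend are assumptions, not the paper's exact LLI rule.

```python
import torch

def inject_local_latents(global_latent: torch.Tensor, local_latents: list,
                         masks: list, step: int, total_steps: int) -> torch.Tensor:
    """Blend each instance's local latent into the global latent inside its mask,
    with an injection weight that fades out by mid-denoising."""
    alpha = max(0.0, 1.0 - step / (0.5 * total_steps))  # inject early, then hand over to global generation
    out = global_latent.clone()
    for local, mask in zip(local_latents, masks):
        out = (1 - alpha * mask) * out + alpha * mask * local
    return out

if __name__ == "__main__":
    g = torch.randn(1, 4, 64, 64)                      # global latent (Stable-Diffusion-like shape)
    l1, l2 = torch.randn_like(g), torch.randn_like(g)  # two instance latents
    m1 = torch.zeros(1, 1, 64, 64); m1[..., :32, :32] = 1
    m2 = torch.zeros(1, 1, 64, 64); m2[..., 32:, 32:] = 1
    print(inject_local_latents(g, [l1, l2], [m1, m2], step=3, total_steps=50).shape)
```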
Citations: 0