
Latest Publications in IET Computer Vision

Structure-Based Uncertainty Estimation for Source-Free Active Domain Adaptation
IF 1.3, CAS Tier 4 (Computer Science), Q4 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE, Pub Date: 2025-04-16, DOI: 10.1049/cvi2.70020
Jihong Ouyang, Zhengjie Zhang, Qingyi Meng, Jinjin Chi

Active domain adaptation (active DA) provides an effective solution by selectively labelling a limited number of target samples to significantly enhance adaptation performance. However, existing active DA methods often struggle in real-world scenarios where, due to data privacy concerns, only a pre-trained source model is available, rather than the source samples. To address this issue, we propose a novel method called the structure-based uncertainty estimation model (SUEM) for source-free active domain adaptation (SFADA). To be specific, we introduce an innovative active sample selection strategy that combines both uncertainty and diversity sampling to identify the most informative samples. We assess the uncertainty in target samples using structure-wise probabilities and implement a diversity selection method to minimise redundancy. For the selected samples, we not only apply a standard supervised loss but also conduct interpolation consistency training to further explore the structural information of the target domain. Extensive experiments across four widely used datasets demonstrate that our method is comparable to or outperforms current UDA and active DA methods.
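The selection strategy described above, scoring unlabelled target samples by uncertainty while spreading the picks for diversity, can be sketched in a few lines. This is a hypothetical illustration only, using entropy as the uncertainty measure and greedy farthest-point diversity; the function names and the additive trade-off are invented for the example and are not SUEM's actual structure-wise formulation.

```python
import math

def entropy(probs):
    # Shannon entropy of a predicted class distribution: higher = more uncertain.
    return -sum(p * math.log(p) for p in probs if p > 0)

def euclidean(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def select_active_samples(features, probs, budget, trade_off=1.0):
    # Greedy selection: score each candidate by its uncertainty (entropy)
    # plus its distance to the nearest already-selected sample (diversity).
    selected, candidates = [], list(range(len(features)))
    while len(selected) < budget and candidates:
        def score(i):
            gap = min((euclidean(features[i], features[j]) for j in selected),
                      default=0.0)
            return entropy(probs[i]) + trade_off * gap
        best = max(candidates, key=score)
        selected.append(best)
        candidates.remove(best)
    return selected
```

With three samples where the first and third have maximally uncertain predictions and the third is far away in feature space, a budget of two picks one uncertain sample and then the distant uncertain one, never the confident sample in between.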

Citations: 0
Synchronised and Fine-Grained Head for Skeleton-Based Ambiguous Action Recognition
IF 1.3, CAS Tier 4 (Computer Science), Q4 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE, Pub Date: 2025-04-15, DOI: 10.1049/cvi2.70016
Hao Huang, Yujie Lin, Siyu Chen, Haiyang Liu

Skeleton-based action recognition using Graph Convolutional Networks (GCNs) has achieved remarkable performance, but recognising ambiguous actions, such as ‘waving’ and ‘saluting’, remains a significant challenge. Existing methods typically rely on a serial combination of GCNs and Temporal Convolutional Networks (TCNs), where spatial and temporal features are extracted independently, leading to unbalanced spatial-temporal information, which hinders accurate action recognition. Moreover, existing methods for ambiguous actions often overemphasise local details, resulting in the loss of crucial global context, which further complicates the task of differentiating ambiguous actions. To address these challenges, the authors propose a lightweight plug-and-play module called Synchronised and Fine-grained Head (SF-Head), inserted between GCN and TCN layers. SF-Head first conducts Synchronised Spatial-Temporal Extraction (SSTE) with a Feature Redundancy Loss (F-RL), ensuring a balanced interaction between the two types of features. It then performs Adaptive Cross-dimensional Feature Aggregation (AC-FA), with a Feature Consistency Loss (F-CL), which aligns the aggregated feature with their original spatial-temporal feature. This aggregation step effectively combines both global context and local details, enhancing the model's ability to classify ambiguous actions. Experimental results on NTU RGB + D 60, NTU RGB + D 120, NW-UCLA and PKU-MMD I datasets demonstrate significant improvements in distinguishing ambiguous actions. Our code will be made available at https://github.com/HaoHuang2003/SFHead.
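The Feature Redundancy Loss (F-RL) is described only at a high level here. One plausible reading, penalising overlap between the spatial and temporal feature branches so that each keeps complementary information, can be sketched as follows; the function name and the use of squared cosine similarity are assumptions for illustration, not the paper's definition.

```python
import math

def cosine(u, v):
    # Cosine similarity between two feature vectors.
    dot = sum(a * b for a, b in zip(u, v))
    return dot / (math.sqrt(sum(a * a for a in u)) *
                  math.sqrt(sum(b * b for b in v)))

def feature_redundancy_loss(spatial_feat, temporal_feat):
    # Squared cosine similarity between the two branches' features:
    # zero when they carry complementary (orthogonal) information,
    # one when they are fully redundant.
    return cosine(spatial_feat, temporal_feat) ** 2
```

Minimising such a term during the SSTE stage would push the two branches apart, which matches the abstract's goal of a balanced spatial-temporal interaction.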

Citations: 0
EDG-CDM: A New Encoder-Guided Conditional Diffusion Model-Based Image Synthesis Method for Limited Data
IF 1.3, CAS Tier 4 (Computer Science), Q4 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE, Pub Date: 2025-04-08, DOI: 10.1049/cvi2.70018
Haopeng Lei, Hao Yin, Kaijun Liang, Mingwen Wang, Jinshan Zeng, Guoliang Luo

The Diffusion Probabilistic Model (DM) has emerged as a powerful generative model in the field of image synthesis, capable of producing high-quality and realistic images. However, training DM requires a large and diverse dataset, which can be challenging to obtain. This limitation weakens the model's generalisation and robustness when training data is limited. To address this issue, the authors propose EDG-CDM, an innovative encoder-guided conditional diffusion model for image synthesis with limited data. Firstly, the authors pre-train the encoder by introducing noise to capture the distribution of image features and generate the condition vector through contrastive learning and KL divergence. Next, the encoder undergoes further training with classification to integrate image class information, providing more favourable and versatile conditions for the diffusion model. Subsequently, the encoder is connected to the diffusion model, which is trained using all available data with encoder-provided conditions. Finally, the authors evaluate EDG-CDM on various public datasets with limited data, conducting extensive experiments and comparing results with state-of-the-art methods using metrics such as Fréchet Inception Distance and Inception Score. The experiments demonstrate that EDG-CDM outperforms existing models by consistently achieving the lowest FID scores and the highest IS scores, highlighting its effectiveness in generating high-quality and diverse images with limited training data. These results underscore the significance of EDG-CDM in advancing image synthesis techniques under data-constrained scenarios.
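The contrastive pre-training step can be illustrated with the standard InfoNCE objective, which pulls an anchor embedding toward a positive view and away from negatives. This is a generic sketch of contrastive learning under cosine similarity, not EDG-CDM's exact loss; the KL-divergence term from the abstract is omitted.

```python
import math

def info_nce(anchor, positive, negatives, temperature=0.1):
    # Contrastive (InfoNCE) objective: low when the anchor is close to its
    # positive view and far from the negatives in embedding space.
    def sim(u, v):
        dot = sum(a * b for a, b in zip(u, v))
        return dot / (math.sqrt(sum(a * a for a in u)) *
                      math.sqrt(sum(b * b for b in v)))
    logits = [sim(anchor, positive)] + [sim(anchor, n) for n in negatives]
    exps = [math.exp(l / temperature) for l in logits]
    return -math.log(exps[0] / sum(exps))
```

The loss is near zero when the positive pair is aligned and the negative points the opposite way, and grows large when a negative is more similar to the anchor than the positive is.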

Citations: 0
Performance of Computer Vision Algorithms for Fine-Grained Classification Using Crowdsourced Insect Images
IF 1.3, CAS Tier 4 (Computer Science), Q4 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE, Pub Date: 2025-04-04, DOI: 10.1049/cvi2.70006
Rita Pucci, Vincent J. Kalkman, Dan Stowell

With fine-grained classification, we identify unique characteristics to distinguish among classes of the same super-class. We are focusing on species recognition in Insecta as insects are critical for biodiversity monitoring and at the base of many ecosystems. With citizen science campaigns, billions of images are collected in the wild. Once these are labelled, experts can use them to create distribution maps. However, the labelling process is time consuming, which is where computer vision comes in. The field of computer vision offers a wide range of algorithms, each with its strengths and weaknesses; how do we identify the algorithm that is in line with our application? To answer this question, we provide a full and detailed evaluation of nine algorithms among deep convolutional networks (CNN), vision transformers (ViT) and locality-based vision transformers (LBVT) on four different aspects: classification performance, embedding quality, computational cost and gradient activity. We offer insights not previously available in this domain, showing to what extent these algorithms solve fine-grained tasks in Insecta. We found that ViT performs the best on inference speed and computational cost, whereas LBVT outperforms the others on performance and embedding quality; the CNNs provide a trade-off among the metrics.

Citations: 0
Foundation Model Based Camouflaged Object Detection
IF 1.3, CAS Tier 4 (Computer Science), Q4 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE, Pub Date: 2025-04-01, DOI: 10.1049/cvi2.70009
Zefeng Chen, Zhijiang Li, Yunqi Xue, Li Zhang

Camouflaged object detection (COD) aims to identify and segment objects that closely resemble and are seamlessly integrated into their surrounding environments, making it a challenging task in computer vision. COD is constrained by the limited availability of training data and annotated samples, and most carefully designed COD models exhibit diminished performance under low-data conditions. In recent years, there has been increasing interest in leveraging foundation models, which have demonstrated robust general capabilities and superior generalisation performance, to address COD challenges. This work proposes a knowledge-guided domain adaptation (KGDA) approach to tackle the data scarcity problem in COD. The method utilises the knowledge descriptions generated by multimodal large language models (MLLMs) for camouflaged images, aiming to enhance the model's comprehension of semantic objects and camouflaged scenes through highly abstract and generalised knowledge representations. To resolve ambiguities and errors in the generated text descriptions, a multi-level knowledge aggregation (MLKG) module is devised. This module consolidates consistent semantic knowledge and forms multi-level semantic knowledge features. To incorporate semantic knowledge into the visual foundation model, the authors introduce a knowledge-guided semantic enhancement adaptor (KSEA) that integrates the semantic knowledge of camouflaged objects while preserving the original knowledge of the foundation model. Extensive experiments demonstrate that our method surpasses 19 state-of-the-art approaches and exhibits strong generalisation capabilities even with limited annotated data.

Citations: 0
Temporal Optimisation of Satellite Image-Based Crop Mapping: A Comparison of Deep Time Series and Semi-Supervised Time Warping Strategies
IF 1.3, CAS Tier 4 (Computer Science), Q4 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE, Pub Date: 2025-03-26, DOI: 10.1049/cvi2.70014
Rosie Finnegan, Joseph Metcalfe, Sara Sharifzadeh, Fabio Caraffini, Xianghua Xie, Alberto Hornero, Nicholas W. Synes

This study presents a novel approach to crop mapping using remotely sensed satellite images. It addresses the significant classification modelling challenges, including (1) the requirements for extensive labelled data and (2) the complex optimisation problem for selection of appropriate temporal windows in the absence of prior knowledge of cultivation calendars. We compare the lightweight Dynamic Time Warping (DTW) classification method with the heavily supervised Convolutional Neural Network - Long Short-Term Memory (CNN-LSTM) using high-resolution multispectral optical satellite imagery (3 m/pixel). Our approach integrates effective practical preprocessing steps, including data augmentation and a data-driven optimisation strategy for the temporal window, even in the presence of numerous crop classes. Our findings demonstrate that DTW, despite its lower data demands, can match the performance of CNN-LSTM through our effective preprocessing steps while significantly improving runtime. These results demonstrate that both CNN-LSTM and DTW can achieve deployment-level accuracy and underscore the potential of DTW as a viable alternative to more resource-intensive models. The results also prove the effectiveness of temporal windowing for improving runtime and accuracy of a crop classification study, even with no prior knowledge of planting timeframes.
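The core of a DTW-based classifier, a warping-tolerant distance plus nearest-neighbour labelling, can be sketched as follows. This is the textbook DTW recurrence with a toy 1-NN wrapper, not the authors' full pipeline; function names and the example labels are illustrative.

```python
def dtw_distance(a, b):
    # Dynamic Time Warping: cumulative alignment cost that tolerates
    # temporal shifts and stretches between two 1-D sequences.
    n, m = len(a), len(b)
    inf = float("inf")
    d = [[inf] * (m + 1) for _ in range(n + 1)]
    d[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            d[i][j] = cost + min(d[i - 1][j],      # insertion
                                 d[i][j - 1],      # deletion
                                 d[i - 1][j - 1])  # match
    return d[n][m]

def classify_1nn(query, labelled):
    # 1-nearest-neighbour labelling under the DTW distance;
    # `labelled` is a list of (label, reference_series) pairs.
    return min(labelled, key=lambda item: dtw_distance(query, item[1]))[0]
```

Because DTW can align a repeated observation to a single reference point at zero cost, a crop's spectral time series still matches its reference profile even when growth stages are shifted or stretched, which is exactly why no prior knowledge of planting timeframes is needed.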

Citations: 0
Crafting Transferable Adversarial Examples Against 3D Object Detection
IF 1.3, CAS Tier 4 (Computer Science), Q4 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE, Pub Date: 2025-03-26, DOI: 10.1049/cvi2.70011
Haiyan Long, Hai Chen, Mengyao Xu, Chonghao Zhang, Fulan Qian

3D object detection, which perceives the surrounding environment through LiDAR and camera sensors to recognise the category and location of objects in a scene, is currently a popular research topic. Deep neural networks (DNNs) have been found to be vulnerable to adversarial examples. Although some approaches have begun to investigate the robustness of 3D object detection models, they currently generate adversarial examples in a white-box setting, and there is a lack of research into generating transferable adversarial examples in a black-box setting. In this paper, a non-end-to-end attack algorithm was proposed for LiDAR pipelines that crafts transferable adversarial examples against 3D object detection. Specifically, the method generates adversarial examples by restraining features with high contribution to downstream tasks and amplifying features with low contribution to downstream tasks in the feature space. Extensive experiments validate that the method produces more transferable adversarial point clouds; for example, the method generates adversarial point clouds in the nuScenes dataset that are about 10% and 7% better than the state-of-the-art method on mAP and NDS, respectively.
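The attack idea, restraining features that contribute strongly to downstream tasks while amplifying those that contribute weakly, can be caricatured in feature space as below. This is a deliberately simplified, hypothetical sketch: the paper operates on intermediate LiDAR-pipeline features with learnt contribution estimates, not a fixed multiplicative step, and all names here are invented for illustration.

```python
def perturb_features(features, contributions, step=0.1):
    # Restrain features whose contribution score is at or above the mean,
    # amplify the rest, by a fixed multiplicative step. Suppressing what the
    # detector relies on (and boosting what it ignores) is the intuition
    # behind the transferable feature-space attack.
    mean_c = sum(contributions) / len(contributions)
    return [f * (1 - step) if c >= mean_c else f * (1 + step)
            for f, c in zip(features, contributions)]
```

Because the perturbation targets features that many detectors share rather than one model's decision boundary, examples crafted this way tend to transfer better across black-box models, which is the effect the abstract reports.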

Citations: 0
Recent Advances of Continual Learning in Computer Vision: An Overview
IF 1.3 | CAS Q4 (Computer Science) | Q4 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2025-03-19 | DOI: 10.1049/cvi2.70013
Haoxuan Qu, Hossein Rahmani, Li Xu, Bryan Williams, Jun Liu

In contrast to batch learning, where all training data is available at once, continual learning denotes a family of methods that accumulate knowledge and learn continuously from data arriving in sequential order. Like human learning, which fuses and accumulates new knowledge acquired at different points in time, continual learning is considered to have high practical significance, and it has been studied across a range of artificial intelligence tasks. In this paper, we present a comprehensive review of recent progress in continual learning in computer vision. The works are grouped by their representative techniques: regularisation, knowledge distillation, memory, generative replay, parameter isolation, and combinations of these. For each category, both its characteristics and its applications in computer vision are presented. The overview closes with a discussion of several subareas where continuous knowledge accumulation is potentially helpful but continual learning has not yet been well studied.
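Of the technique families this overview covers, regularisation-based continual learning is the simplest to sketch: when training on a new task, add a quadratic penalty anchoring parameters to the values learned on earlier tasks, in the spirit of elastic weight consolidation. The scalar regression below is a toy illustration, not a method from the survey; the importance weight is a hypothetical constant:

```python
import numpy as np

rng = np.random.default_rng(1)

def task(w_true, n=200):
    # A 1-D regression task with slope w_true and a little noise.
    x = rng.normal(size=n)
    y = w_true * x + 0.01 * rng.normal(size=n)
    return x, y

def train(x, y, w0, anchor=None, importance=0.0, lr=0.1, steps=200):
    # Gradient descent on MSE plus an EWC-style quadratic penalty that
    # keeps w close to the parameter learned on the previous task.
    w = w0
    for _ in range(steps):
        grad = 2 * np.mean((w * x - y) * x)
        if anchor is not None:
            grad += 2 * importance * (w - anchor)
        w -= lr * grad
    return w

def mse(w, x, y):
    return float(np.mean((w * x - y) ** 2))

xa, ya = task(2.0)   # task A: slope 2
xb, yb = task(-1.0)  # task B: slope -1

w_a = train(xa, ya, w0=0.0)
w_naive = train(xb, yb, w0=w_a)                             # catastrophic forgetting
w_ewc = train(xb, yb, w0=w_a, anchor=w_a, importance=5.0)   # regularised

forgetting_naive = mse(w_naive, xa, ya)
forgetting_ewc = mse(w_ewc, xa, ya)
```

With the penalty, the parameter lands between the two tasks' optima, so error on the earlier task grows far less than under naive fine-tuning.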

{"title":"Recent Advances of Continual Learning in Computer Vision: An Overview","authors":"Haoxuan Qu,&nbsp;Hossein Rahmani,&nbsp;Li Xu,&nbsp;Bryan Williams,&nbsp;Jun Liu","doi":"10.1049/cvi2.70013","DOIUrl":"10.1049/cvi2.70013","url":null,"abstract":"<p>In contrast to batch learning where all training data is available at once, continual learning represents a family of methods that accumulate knowledge and learn continuously with data available in sequential order. Similar to the human learning process with the ability of learning, fusing and accumulating new knowledge acquired at different time steps, continual learning is considered to have high practical significance. Hence, continual learning has been studied in various artificial intelligence tasks. In this paper, we present a comprehensive review of the recent progress of continual learning in computer vision. In particular, the works are grouped by their representative techniques, including regularisation, knowledge distillation, memory, generative replay, parameter isolation and a combination of the above techniques. For each category of these techniques, both its characteristics and applications in computer vision are presented. 
At the end of this overview, several subareas, where continuous knowledge accumulation is potentially helpful while continual learning has not been well studied, are discussed.</p>","PeriodicalId":56304,"journal":{"name":"IET Computer Vision","volume":"19 1","pages":""},"PeriodicalIF":1.3,"publicationDate":"2025-03-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://onlinelibrary.wiley.com/doi/epdf/10.1049/cvi2.70013","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143689168","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Cited: 0
A Review of Multi-Object Tracking in Recent Times
IF 1.3 | CAS Q4 (Computer Science) | Q4 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2025-03-09 | DOI: 10.1049/cvi2.70010
Suya Li, Hengyi Ren, Xin Xie, Ying Cao

Multi-object tracking (MOT) is a fundamental problem in computer vision that involves tracing the trajectories of foreground targets throughout a video sequence while establishing correspondences for identical objects across frames. With the advancement of deep learning techniques, methods based on deep learning have significantly improved accuracy and efficiency in MOT. This paper reviews several recent deep learning-based MOT methods and categorises them into three main groups: detection-based, single-object tracking (SOT)-based, and segmentation-based methods, according to their core technologies. Additionally, this paper discusses the metrics and datasets used for evaluating MOT performance, the challenges faced in the field, and future directions for research.
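The association step at the core of detection-based trackers can be sketched as greedy IoU matching between current tracks and new detections. This is a minimal illustration only: practical pipelines such as SORT add a motion model, Hungarian matching, and appearance features for re-identification:

```python
def iou(a, b):
    # a, b: boxes as (x1, y1, x2, y2)
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter) if inter > 0 else 0.0

def associate(tracks, detections, iou_min=0.3):
    # Greedy data association: take track/detection pairs in decreasing
    # IoU order, each track and detection used at most once.
    pairs = sorted(
        ((iou(t, d), ti, di)
         for ti, t in enumerate(tracks)
         for di, d in enumerate(detections)),
        reverse=True,
    )
    matches, used_t, used_d = [], set(), set()
    for score, ti, di in pairs:
        if score < iou_min:
            break  # remaining pairs overlap too little to match
        if ti not in used_t and di not in used_d:
            matches.append((ti, di))
            used_t.add(ti)
            used_d.add(di)
    return matches

tracks = [(0, 0, 10, 10), (20, 20, 30, 30)]
dets = [(21, 19, 31, 29), (1, 0, 11, 10), (50, 50, 60, 60)]
m = associate(tracks, dets)
```

Unmatched detections (here the third box) would spawn new tracks, and unmatched tracks age out after a few missed frames.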

{"title":"A Review of Multi-Object Tracking in Recent Times","authors":"Suya Li,&nbsp;Hengyi Ren,&nbsp;Xin Xie,&nbsp;Ying Cao","doi":"10.1049/cvi2.70010","DOIUrl":"10.1049/cvi2.70010","url":null,"abstract":"<p>Multi-object tracking (MOT) is a fundamental problem in computer vision that involves tracing the trajectories of foreground targets throughout a video sequence while establishing correspondences for identical objects across frames. With the advancement of deep learning techniques, methods based on deep learning have significantly improved accuracy and efficiency in MOT. This paper reviews several recent deep learning-based MOT methods and categorises them into three main groups: detection-based, single-object tracking (SOT)-based, and segmentation-based methods, according to their core technologies. Additionally, this paper discusses the metrics and datasets used for evaluating MOT performance, the challenges faced in the field, and future directions for research.</p>","PeriodicalId":56304,"journal":{"name":"IET Computer Vision","volume":"19 1","pages":""},"PeriodicalIF":1.3,"publicationDate":"2025-03-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://onlinelibrary.wiley.com/doi/epdf/10.1049/cvi2.70010","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143581368","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Cited: 0
TAPCNet: Tactile-Assisted Point Cloud Completion Network via Iterative Fusion Strategy
IF 1.3 | CAS Q4 (Computer Science) | Q4 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2025-03-07 | DOI: 10.1049/cvi2.70012
Yangrong Liu, Jian Li, Huaiyu Wang, Ming Lu, Haorao Shen, Qin Wang

With the development of the 3D point cloud field in recent years, point cloud completion of 3D objects has attracted increasing attention from researchers. Point cloud data can accurately express the shape of 3D objects at different resolutions, but the raw point clouds collected directly by 3D scanning equipment are often incomplete and unevenly dense. Touch is a distinctive way to perceive the 3D shape of an object: tactile point clouds provide local shape information for unknown regions during completion, a valuable complement to the point cloud data acquired with visual devices. To exploit tactile information effectively, the authors propose an innovative tactile-assisted point cloud completion network, TAPCNet. This network is the first neural network customised for the joint input of tactile point clouds and incomplete point clouds, fusing the two types of point cloud information in the feature domain. In addition, a new dataset named 3DVT was built to fit the proposed network model. Based on the tactile fusion strategy and related modules, multiple comparative experiments were conducted by controlling the quantity of tactile point clouds on the 3DVT dataset. The experimental results show that TAPCNet outperforms state-of-the-art methods on the benchmark.
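Fusing tactile and visual point clouds in the feature domain, as the abstract describes, can be sketched with a PointNet-style encoder (a shared per-point layer followed by symmetric max-pooling) applied to each modality, with the two global features then concatenated. The encoder weights and dimensions here are hypothetical placeholders, not TAPCNet's architecture:

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical shared per-point weights for a PointNet-style encoder.
W1 = rng.normal(size=(3, 16))

def encode(points):
    # points: (N, 3) -> permutation-invariant global feature (16,)
    h = np.maximum(points @ W1, 0.0)  # shared per-point ReLU layer
    return h.max(axis=0)              # symmetric max-pooling

def fuse(visual_pts, tactile_pts):
    # Feature-domain fusion of the two modalities: encode each point
    # cloud separately, then concatenate the global features.
    return np.concatenate([encode(visual_pts), encode(tactile_pts)])

visual = rng.normal(size=(1024, 3))   # dense but incomplete visual scan
tactile = rng.normal(size=(32, 3))    # sparse touch samples
g = fuse(visual, tactile)
```

Because the pooling is symmetric, the global feature is invariant to point ordering, and the sparse tactile samples contribute on an equal footing with the dense visual scan.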

{"title":"TAPCNet: Tactile-Assisted Point Cloud Completion Network via Iterative Fusion Strategy","authors":"Yangrong Liu,&nbsp;Jian Li,&nbsp;Huaiyu Wang,&nbsp;Ming Lu,&nbsp;Haorao Shen,&nbsp;Qin Wang","doi":"10.1049/cvi2.70012","DOIUrl":"10.1049/cvi2.70012","url":null,"abstract":"<p>With the development of the 3D point cloud field in recent years, point cloud completion of 3D objects has increasingly attracted researchers' attention. Point cloud data can accurately express the shape information of 3D objects at different resolutions, but the original point clouds collected directly by various 3D scanning equipment are often incomplete and have uneven density. Tactile is one distinctive way to perceive the 3D shape of an object. Tactile point clouds can provide local shape information for unknown areas during completion, which is a valuable complement to the point cloud data acquired with visual devices. In order to effectively improve the effect of point cloud completion using tactile information, the authors propose an innovative tactile-assisted point cloud completion network, TAPCNet. This network is the first neural network customised for the input of tactile point clouds and incomplete point clouds, which can fuse two types of point cloud information in the feature domain. Besides, a new dataset named 3DVT was rebuilt, to fit the proposed network model. Based on the tactile fusion strategy and related modules, multiple comparative experiments were conducted by controlling the quantity of tactile point clouds on the 3DVT dataset. 
The experimental data illustrates that TAPCNet can outperform the state-of-the-art methods in the benchmark.</p>","PeriodicalId":56304,"journal":{"name":"IET Computer Vision","volume":"19 1","pages":""},"PeriodicalIF":1.3,"publicationDate":"2025-03-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://onlinelibrary.wiley.com/doi/epdf/10.1049/cvi2.70012","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143571267","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Cited: 0