
Latest Publications From IET Computer Vision

A Multi-Layer Convolutional Sparse Network for Pattern Classification Based on Sequential Dictionary Learning
IF 1.3 | CAS Zone 4 (Computer Science) | Q4 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2026-01-11 | DOI: 10.1049/cvi2.70055
Farhad Sadeghi Almalou, Farbod Razzazi, Arash Amini

Convolutional sparse coding (CSC) using learnt convolutional dictionaries has recently emerged as an effective technique for emphasising discriminative structures in signal and image processing applications. In this paper, we propose a multilayer model for convolutional sparse networks (CSNs), based on hierarchical convolutional sparse coding and dictionary learning, as a competitive alternative to conventional deep convolutional neural networks (CNNs). In the proposed CSN architecture, each layer learns a convolutional dictionary from the feature maps of the preceding layer (if available), and then uses it to extract sparse representations. This hierarchical process is repeated to obtain high-level feature maps in the final layer, suitable for pattern recognition and classification tasks. One key advantage of the CSN framework is its reduced sensitivity to training set size and its significantly lower computational complexity compared to CNNs. Experimental results on image classification tasks show that the proposed model achieves up to 7% higher accuracy than CNNs when trained with only 150 samples, while reducing computational cost by at least 50% under similar conditions.
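
The paper is summarised here only at abstract level; as a rough, hypothetical illustration of the kind of layer a CSN stacks, the sketch below solves one convolutional sparse coding step with ISTA against a learnt dictionary. The function names, iteration count and thresholds are illustrative assumptions, not the authors' implementation.

```python
# Hypothetical sketch of a single convolutional sparse coding (CSC) layer
# solved by ISTA; shapes and hyper-parameters are illustrative assumptions.
import torch
import torch.nn.functional as F

def soft_threshold(z, lam):
    """Element-wise shrinkage operator used by ISTA."""
    return torch.sign(z) * torch.clamp(z.abs() - lam, min=0.0)

def csc_layer(x, dictionary, n_iter=20, step=0.1, lam=0.05):
    """Compute sparse feature maps z such that conv_transpose(z, D) approximates x.

    x:          (B, C_in, H, W) input image or previous-layer feature maps
    dictionary: (C_code, C_in, k, k) learnt convolutional dictionary (k odd)
    returns:    (B, C_code, H, W) sparse feature maps for the next layer
    """
    pad = dictionary.shape[-1] // 2
    z = torch.zeros_like(F.conv2d(x, dictionary, padding=pad))
    for _ in range(n_iter):
        recon = F.conv_transpose2d(z, dictionary, padding=pad)   # D z
        grad = F.conv2d(recon - x, dictionary, padding=pad)      # D^T (D z - x)
        z = soft_threshold(z - step * grad, step * lam)
    return z

# Toy usage with a random dictionary of 32 atoms of size 5x5:
x = torch.randn(2, 3, 64, 64)
D = torch.randn(32, 3, 5, 5)
codes = csc_layer(x, D)
```

Stacking such layers, each learning its dictionary from the previous layer's codes, gives the hierarchical structure the abstract describes.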

Citations: 0
Predicting Fire Heat Release Rate Using Deep Perceptual and Detail-Aware Hybrid Feature Fusion From Early Smoke Signals
IF 1.3 | CAS Zone 4 (Computer Science) | Q4 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2026-01-04 | DOI: 10.1049/cvi2.70054
Tianliang Liu, Jinkai Wang, Xu Zhou, Jun Wan, Xiaogang Cheng, Xiubin Dai

With urbanisation accelerating, predicting the heat release rate (HRR) of building fires using visual data has emerged as a pivotal research focus in the field of fire rescue. However, existing approaches face challenges, such as limited training data and complex models, which lead to suboptimal performance and slow inference speeds. To address these issues and adapt to the rapid morphological changes of smoke in dynamic fire environments, we propose a lightweight neural network prediction model based on adaptive pooling with channel information interaction (APCI). This model achieves high precision while maintaining fast inference speed. Our approach employs simplified dense connections to propagate shallow smoke features, thereby effectively capturing the relationship between smoke textures and multiscale features to accommodate variations in smoke morphology. To mitigate the loss of smoke features caused by spatial misalignment and ventilation disturbances during downsampling, we introduce an adaptive weighted pooling mechanism that fully leverages the detailed information contained in the smoke. Additionally, an enhanced channel shuffle operation in the channel information interaction ensures effective, detail-aware cross-level information exchange during sudden escalations in fire intensity within the hybrid feature fusion framework. Experiments on the smoke-heat release rate dataset we created demonstrate that the proposed method achieves a coefficient of determination ($R^2$) of 0.937, a root mean square error (RMSE) of 23.0 kW, a mean absolute error (MAE) of 17.4 kW and an inference time of 4.13 ms per image.
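
The reported figures ($R^2$ = 0.937, RMSE = 23.0 kW, MAE = 17.4 kW) are standard regression metrics; the short sketch below shows how they are typically computed for HRR predictions. The example arrays are placeholders, not the authors' data.

```python
# Minimal NumPy sketch of the regression metrics quoted in the abstract
# (R^2, RMSE, MAE); the arrays in the usage example are made-up values.
import numpy as np

def hrr_metrics(y_true, y_pred):
    """Return (R^2, RMSE, MAE) for heat-release-rate predictions in kW."""
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    residual = y_true - y_pred
    ss_res = np.sum(residual ** 2)                      # residual sum of squares
    ss_tot = np.sum((y_true - y_true.mean()) ** 2)      # total sum of squares
    r2 = 1.0 - ss_res / ss_tot
    rmse = np.sqrt(np.mean(residual ** 2))
    mae = np.mean(np.abs(residual))
    return r2, rmse, mae

# Example with placeholder HRR values in kW:
r2, rmse, mae = hrr_metrics([100, 250, 400], [110, 240, 390])
```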

Citations: 0
TaiChi-AQA: A Dataset and Framework for Action Quality Assessment and Visual Analysis
IF 1.3 | CAS Zone 4 (Computer Science) | Q4 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2025-12-29 | DOI: 10.1049/cvi2.70053
Dejin Wang, Fengyan Lin, Kexin Zhu, Zhide Chen

Action Quality Assessment (AQA) has become an advanced technology applied in various domains. However, most existing datasets focus on sports events, such as the Olympics, whereas datasets tailored for daily exercise activities remain scarce. Additionally, many of these datasets are unsuitable for direct application in AQA tasks. To address these limitations, we constructed a new AQA dataset, TaiChi-AQA, which includes detailed scoring annotations. Our dataset comprises 1313 Tai Chi action videos and features a comprehensive set of fine-grained labels, including action labels, action descriptions and frame-level perspective information. To validate the effectiveness of TaiChi-AQA, we systematically evaluated it using a variety of popular AQA methods. We also propose a straightforward yet effective module that integrates a multi-head attention mechanism with a gated multilayer perceptron (gMLP). This module is combined with the distributed autoencoder (DAE) framework. Extensive experiments demonstrate that our method achieves state-of-the-art performance on the TaiChi-AQA dataset. The dataset is publicly available at https://github.com/mlxger/TaiChi-AQA.
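
As a hedged sketch of the kind of module the abstract describes (multi-head attention combined with a gated MLP), the block below pairs standard self-attention with a gMLP spatial gating unit. The wiring, dimensions and head count are assumptions for exposition, not the released TaiChi-AQA model.

```python
# Illustrative attention + gMLP block; sizes and residual wiring are assumed.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SpatialGatingUnit(nn.Module):
    """gMLP gating: half the channels gate the other half after token mixing."""
    def __init__(self, dim, seq_len):
        super().__init__()
        self.norm = nn.LayerNorm(dim // 2)
        self.spatial_proj = nn.Linear(seq_len, seq_len)  # mixes along the token axis

    def forward(self, x):                  # x: (B, T, dim)
        u, v = x.chunk(2, dim=-1)          # each (B, T, dim // 2)
        v = self.norm(v)
        v = self.spatial_proj(v.transpose(1, 2)).transpose(1, 2)
        return u * v

class AttentionGMLPBlock(nn.Module):
    """Multi-head self-attention followed by a gated-MLP (gMLP) sub-block."""
    def __init__(self, dim, seq_len, ffn_dim, n_heads=4):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, n_heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)
        self.proj_in = nn.Linear(dim, ffn_dim)
        self.sgu = SpatialGatingUnit(ffn_dim, seq_len)
        self.proj_out = nn.Linear(ffn_dim // 2, dim)

    def forward(self, x):                  # x: (B, T, dim) clip-level features
        x = x + self.attn(x, x, x, need_weights=False)[0]
        y = self.proj_out(self.sgu(F.gelu(self.proj_in(self.norm(x)))))
        return x + y

# Toy usage on 16 clip-level feature tokens of width 256:
block = AttentionGMLPBlock(dim=256, seq_len=16, ffn_dim=512)
out = block(torch.randn(2, 16, 256))
```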

Citations: 0
Prior Matters: Contribution- and Semantics-Aware Prior Estimation for Few-Shot Learning
IF 1.3 | CAS Zone 4 (Computer Science) | Q4 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2025-12-14 | DOI: 10.1049/cvi2.70051
Yanling Tian, Jiaying Wu, Jinglu Hu

Few-shot learning (FSL) aims to classify novel categories using only a few labelled examples, which poses significant challenges for generalisation. Among existing approaches, distribution-based methods have shown promise by constructing class distributions for novel categories using statistical priors transferred from base classes. However, these methods often rely on nearest-neighbour visual similarity and assume equal contributions from selected base classes, which can lead to inaccurate priors. In this paper, we propose CAPE (contribution-aware prior estimation), a method that addresses this issue from two complementary perspectives. On the one hand, CAPE assigns adaptive weights to base class prototypes based on their relevance to the novel support set, mitigating the limitations of equal-contribution assumptions. On the other hand, to compensate for the ambiguity of visual features, especially in the 1-shot scenario, we incorporate semantic information from category labels to enhance prior selection. By jointly leveraging visual and semantic information, CAPE constructs more accurate and robust priors for the feature distributions of novel classes. Extensive experiments on four widely used FSL benchmarks, including miniImageNet, tieredImageNet, CIFAR-FS and CUB datasets, demonstrate that our method consistently outperforms existing approaches, highlighting the effectiveness of contribution- and semantics-aware prior estimation.
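
A minimal sketch of contribution-weighted prior estimation in this spirit is given below: base-class statistics are combined with adaptive weights derived from a fused visual-semantic query. The fusion rule, temperature and variable names are illustrative assumptions, not CAPE's exact formulation, and the label embedding is assumed to live in the same space as the visual features.

```python
# Illustrative sketch of contribution-weighted prior estimation; the fusion
# rule and hyper-parameters are assumptions for exposition only.
import numpy as np

def estimate_prior(support_feat, label_emb, base_means, base_covs,
                   semantic_weight=0.5, temperature=10.0):
    """
    support_feat: (d,) mean visual feature of the novel-class support set
    label_emb:    (d,) semantic embedding of the class name (assumed same space)
    base_means:   (N, d) per-base-class feature means
    base_covs:    (N, d, d) per-base-class feature covariances
    Returns a calibrated (mean, cov) prior for the novel class distribution.
    """
    query = (1 - semantic_weight) * support_feat + semantic_weight * label_emb
    query = query / np.linalg.norm(query)
    bases = base_means / np.linalg.norm(base_means, axis=1, keepdims=True)
    sims = bases @ query                            # cosine similarities
    w = np.exp(temperature * sims)
    w /= w.sum()                                    # adaptive contribution weights
    mean = w @ base_means                           # weighted prior mean
    cov = np.tensordot(w, base_covs, axes=1)        # weighted prior covariance
    return mean, cov
```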

Citations: 0
End-To-End Multiple Object Detection and Tracking With Spatio-Temporal Transformers
IF 1.3 | CAS Zone 4 (Computer Science) | Q4 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2025-12-14 | DOI: 10.1049/cvi2.70052
Qi Lei, Xiangyu Song, Shijie Sun, Huansheng Song, Lichen Liu, Zhaoyang Zhang

Optimising both trajectory position information and identity information is a key challenge in multiple object tracking. Mainstream approaches ensure ID consistency by combining detection data with various additional information. However, many methods overlook the inherent spatio-temporal correlation of trajectory position information. We argue that such additional modules are redundant and that, by utilising motion constraints, forecasting trajectories directly without inter-frame association is adequate. In this study, we introduce a novel end-to-end network, spatio-temporal multiple object tracking with transformer (STMOTR), which employs motion constraints to establish binary matching within the reconstructed deformable-DETR network, heuristically learning object trajectories from the Video Swin backbone. This subtly constrained matching rule not only maintains detection ID consistency but also significantly reduces the potential for tracking ID switches. We evaluated STMOTR on UA-DETRAC and our proposed tunnel multiple object tracking dataset (T-MOT), achieving state-of-the-art performance with 39.8% PR-MOTA on UA-DETRAC and 79.6% MOTA on T-MOT. The source code is also available at https://github.com/Jade-Ray/STMOTR.
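
As a generic illustration of motion-constrained binary matching (not the authors' exact rule), the sketch below gates candidate pairs by IoU with the motion-predicted box before Hungarian assignment; the gate threshold and cost definition are assumptions.

```python
# Generic motion-constrained matching sketch: pairs whose IoU with the
# motion-predicted box falls below a gate are forbidden before assignment.
import numpy as np
from scipy.optimize import linear_sum_assignment

def iou(a, b):
    """IoU of two boxes in (x1, y1, x2, y2) format."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter + 1e-9)

def motion_constrained_match(predicted_boxes, detected_boxes, iou_gate=0.3):
    """Match motion-predicted track boxes to current-frame detections."""
    cost = np.ones((len(predicted_boxes), len(detected_boxes)))
    for i, p in enumerate(predicted_boxes):
        for j, d in enumerate(detected_boxes):
            overlap = iou(p, d)
            # forbid pairs violating the motion constraint with a large cost
            cost[i, j] = 1.0 - overlap if overlap >= iou_gate else 1e6
    rows, cols = linear_sum_assignment(cost)
    return [(i, j) for i, j in zip(rows, cols) if cost[i, j] < 1e6]
```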

Citations: 0
A Lightweight Dual-Branch Meta-Learner for Few-Shot HSI Classification With Cross-Domain Adaptation
IF 1.3 | CAS Zone 4 (Computer Science) | Q4 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2025-12-12 | DOI: 10.1049/cvi2.70050
Junqi Yao, Yonghui Yang, Ou Yang, Qingtian Wu

Hyperspectral imaging (HSI) plays a crucial role in urban area analysis from satellite data and supports the continuous advancement of intelligent cities. However, its practical deployment is hindered by two major challenges: the scarcity of reliable training annotations and the high spectral similarity among different land-cover classes. To address these issues, this paper introduces a novel meta-learning framework that synergistically combines knowledge transfer across domains with a dual-adjustment mode (comprising intracorrection (IC) and interalignment (IA)), while ensuring end-to-end trainability. Our contributions are twofold. (1) We refine the 3D attention network TGAN into TGAN2 (3D ghost attention network v2) by replacing the original ghost blocks with ghost-V2 modules and enlarging the receptive field to capture global context. (2) We propose a dual-adjustment mode (comprising intracorrection (IC) and interalignment (IA)) to generate robust class prototypes and mitigate domain shift. These innovations are integrated into our overarching framework, DMCM2 (dual-adjustment cross-domain meta-learning framework v2), which is unified by its end-to-end trainability and efficiency. The code and models will be publicly available at: https://github.com/YAO-JQ/DMCM2.
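
One plausible reading of the two adjustments is sketched below: an intra-correction that refines class prototypes with confidently pseudo-labelled query features, and an inter-alignment that penalises the mean discrepancy between source- and target-domain embeddings. The thresholds, mixing weight and loss choice are illustrative assumptions, not DMCM2's exact rules.

```python
# Assumed sketch of intra-correction and inter-alignment for episodic training.
import torch
import torch.nn.functional as F

def intracorrect_prototypes(protos, query_feats, conf_thresh=0.8, mix=0.5):
    """protos: (C, d) class prototypes; query_feats: (Q, d) query embeddings."""
    logits = -torch.cdist(query_feats, protos)           # nearest-prototype scores
    probs = logits.softmax(dim=1)
    conf, pseudo = probs.max(dim=1)
    corrected = protos.clone()
    for c in range(protos.shape[0]):
        mask = (pseudo == c) & (conf > conf_thresh)
        if mask.any():
            # pull the prototype toward confidently pseudo-labelled queries
            corrected[c] = mix * protos[c] + (1 - mix) * query_feats[mask].mean(0)
    return corrected

def interalign_loss(source_feats, target_feats):
    """Simple first-moment alignment between source- and target-domain embeddings."""
    return F.mse_loss(source_feats.mean(dim=0), target_feats.mean(dim=0))
```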

Citations: 0
SwapDiffusion: Flexible Swapping Disentangled Content-Style Embeddings in $\mathcal{P}+$ Space for Diffusion Models
IF 1.3 | CAS Zone 4 (Computer Science) | Q4 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2025-12-11 | DOI: 10.1049/cvi2.70048
Yongxing He, Zejian Li, Wei Li, Xinlong Zhang, Jia Wei, Yongchuan Tang

This paper introduces SwapDiffusion, a novel framework for content-style disentanglement in diffusion-based image generation. We advance the understanding of the extended textual conditioning ($\mathcal{P}+$) space in SDXL by identifying the 4th and 7th transformer block layers as primarily responsible for content and style, respectively. Building on this insight, we introduce a novel q-transformer architecture. It features a block-diagonal matrix masked self-attention layer that effectively isolates content and style embeddings by reducing inter-query interference. This design not only enhances disentanglement but also improves training efficiency. Crucially, the learnt image embeddings align well with textual ones, enabling flexible content and style control via images, text or their combinations. SwapDiffusion supports diverse applications such as style transfer (image- or text-driven), image variation, stylised text-to-image generation and multimodal-prompted image synthesis. Experimental results demonstrate that by aligning learnt image embeddings with the U-Net's pre-identified functional layers for content and style, SwapDiffusion achieves superior content-style separation and image quality while offering greater adaptability than existing approaches. The implementation code and pre-trained models will be released at https://github.com/lioo717/SwapDiffusion.
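
A minimal sketch of block-diagonal masked self-attention of the kind described is shown below, assuming one group of content queries and one group of style queries; group sizes, dimensionality and head count are placeholders.

```python
# Block-diagonal attention mask keeping content and style query groups apart.
import torch
import torch.nn as nn

def block_diagonal_mask(n_content, n_style):
    """Boolean mask where True marks positions attention is *not* allowed to use."""
    n = n_content + n_style
    mask = torch.ones(n, n, dtype=torch.bool)
    mask[:n_content, :n_content] = False   # content queries attend only to content
    mask[n_content:, n_content:] = False   # style queries attend only to style
    return mask

# Toy usage with 4 content and 4 style queries of width 256:
n_content, n_style, dim = 4, 4, 256
attn = nn.MultiheadAttention(dim, num_heads=8, batch_first=True)
queries = torch.randn(1, n_content + n_style, dim)
out, _ = attn(queries, queries, queries,
              attn_mask=block_diagonal_mask(n_content, n_style))
```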

Citations: 0
Refining Vision-Based Video Captioning via Object Semantic Prior
IF 1.3 | CAS Zone 4 (Computer Science) | Q4 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2025-12-07 | DOI: 10.1049/cvi2.70049
Wei-Teng Xu, Hong-Bo Zhang, Qing Lei, Jing-Hua Liu, Ji-Xiang Du

This paper presents a novel video captioning method guided by object semantic priors, aimed at improving the performance of vision-based video captioning models. The proposed approach leverages an object detection model to extract semantic representations of objects within image sequences, which are used as prior information to enhance the visual features of the video. During the encoding stage, this prior information is integrated with the video content, enabling a more comprehensive understanding of the visual context. In the decoding stage, the prior information guides the generation of more accurate and contextually appropriate captions. Extensive experiments on the MSVD and MSR-VTT datasets show that the proposed method significantly outperforms existing vision-based video captioning approaches in terms of caption accuracy and relevance. The results validate the effectiveness of incorporating object semantic priors into vision-based models for generating high-quality video captions.
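
A hedged sketch of one plausible fusion step follows: object semantic embeddings are injected into frame features via cross-attention before encoding. The module name and the choice of cross-attention are assumptions, not the paper's exact operator.

```python
# Assumed cross-attention fusion of object semantic priors with frame features.
import torch
import torch.nn as nn

class ObjectPriorFusion(nn.Module):
    def __init__(self, dim, n_heads=8):
        super().__init__()
        self.cross_attn = nn.MultiheadAttention(dim, n_heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)

    def forward(self, frame_feats, object_embeds):
        """frame_feats: (B, T, d) per-frame visual features.
        object_embeds: (B, K, d) semantic embeddings of detected objects."""
        prior, _ = self.cross_attn(frame_feats, object_embeds, object_embeds)
        return self.norm(frame_feats + prior)   # prior-enhanced visual features

# Toy usage: 20 frames, 5 detected objects, feature width 512.
fusion = ObjectPriorFusion(dim=512)
enhanced = fusion(torch.randn(2, 20, 512), torch.randn(2, 5, 512))
```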

Citations: 0
Facial Forgery Detection Based on Mask and Frequency Diffusion Reconstruction
IF 1.3 | CAS Zone 4 (Computer Science) | Q4 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2025-11-22 | DOI: 10.1049/cvi2.70046
Yanhan Peng, Xin Liu, Fengbiao Zan, Jian Yu

The field of face forgery detection continues to encounter significant challenges in achieving generalisation, and the rapid advancement of generative models, particularly diffusion models, has further intensified this problem. To tackle these challenges, we propose a novel detection framework that integrates spatial central masking and frequency-enriched diffusion reconstruction (MFDR), thereby enhancing both local detail reconstruction accuracy and global structural recovery. Specifically, during data preprocessing, we apply central masking to reconstruct the original image. The detector learns pixel-level discrepancies between the reconstructed masked regions and the corresponding original regions, which improves sensitivity to reconstruction errors and guides the model to focus more effectively on localised artefact detection. At the frequency-domain level, our proposed frequency-enhanced diffusion module explicitly optimises residual reconstruction in both low- and high-frequency subbands, effectively improving global structural recovery and preserving high-frequency detail fidelity. This, in turn, strengthens the model's capacity to capture forgery traces. Furthermore, during training, we introduce a contrastive learning strategy in which real images processed through masked diffusion and frequency reconstruction are used as positive samples. This design enables the detector to jointly perceive spatial reconstruction errors and preserve frequency-domain texture fidelity, thereby significantly enhancing its ability to detect subtle forgery artefacts. Experimental results show that our method achieves superior performance in detecting face images generated by various diffusion models (e.g., DDPM, LDM) and surpasses the diffusion reconstruction contrastive training (DRCT) baseline.
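
Two of the named preprocessing ideas are sketched below under stated assumptions: masking the central region of an image, and splitting a reconstruction residual into low- and high-frequency subbands with an FFT. The mask ratio and radial cut-off are illustrative, not the paper's settings.

```python
# Assumed central masking and FFT-based low/high-frequency residual split.
import torch

def central_mask(img, ratio=0.5):
    """Zero out a central square covering `ratio` of each side. img: (B, C, H, W)."""
    b, c, h, w = img.shape
    mh, mw = int(h * ratio), int(w * ratio)
    top, left = (h - mh) // 2, (w - mw) // 2
    masked = img.clone()
    masked[:, :, top:top + mh, left:left + mw] = 0.0
    return masked

def frequency_split(residual, cutoff=0.25):
    """Split a residual into low/high-frequency parts via a radial FFT mask."""
    b, c, h, w = residual.shape
    spec = torch.fft.fftshift(torch.fft.fft2(residual), dim=(-2, -1))
    yy, xx = torch.meshgrid(torch.arange(h), torch.arange(w), indexing="ij")
    dist = (((yy - h // 2) ** 2 + (xx - w // 2) ** 2).float()).sqrt()
    low_mask = (dist <= cutoff * min(h, w)).float()      # keep low frequencies
    low = torch.fft.ifft2(torch.fft.ifftshift(spec * low_mask, dim=(-2, -1))).real
    high = residual - low
    return low, high
```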

Citations: 0
Robust 2D/3D Alignment With Enhanced NeRF 3D Reconstruction and Causal Feature Fusion
IF 1.3 | CAS Zone 4 (Computer Science) | Q4 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2025-11-19 | DOI: 10.1049/cvi2.70045
Jie Lin, Yi Bai, Yupei Deng, Bing Hu, Lifan Zhang

This paper proposes a unified framework integrating enhanced neural radiance fields (NeRF) with causal feature fusion to tackle 3D reconstruction and 2D/3D alignment challenges in complex scenes. These challenges are threefold: in 3D reconstruction, explicit representations yield low reconstruction quality whereas implicit ones reconstruct slowly; 2D/3D matching lacks effective information fusion; and existing 3D reconstruction methods fail to provide complementary information for alignment, while relying solely on 2D alignment is susceptible to background interference. To improve 2D/3D alignment accuracy, we propose a holistic alignment architecture that includes a combined implicit and explicit 3D reconstruction method capable of constructing higher-quality 3D scenes and, crucially, of generating richer features, such as voxel density and colour information, that complement 2D cues and improve robustness to background interference. Meanwhile, we construct 2D causal features and fuse them through multidimensional anti-interference feature computation to achieve more robust alignment. Extensive experiments validate our framework on both public benchmarks and specialised domains. In medical endoscopy, the system assists surgeons by providing real-time 3D contextual guidance, reducing procedural risks. Quantitative results show superior performance over state-of-the-art methods. The proposed technology demonstrates broad applicability in scenarios demanding robust scene understanding.
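
Any NeRF-based reconstruction, including an enhanced variant like the one described, rests on the standard volume-rendering step sketched below, in which densities along a ray are converted into alpha weights and composited into a pixel colour; ray sampling and the paper's specific enhancements are omitted.

```python
# Standard NeRF volume-rendering step for a single ray (densities -> weights -> colour).
import torch

def render_ray(densities, colours, deltas):
    """densities: (N,) sigma at N samples; colours: (N, 3); deltas: (N,) sample spacings."""
    alphas = 1.0 - torch.exp(-densities * deltas)              # per-sample opacity
    trans = torch.cumprod(1.0 - alphas + 1e-10, dim=0)
    trans = torch.cat([torch.ones(1), trans[:-1]])             # accumulated transmittance
    weights = alphas * trans                                   # compositing weights
    rgb = (weights.unsqueeze(-1) * colours).sum(dim=0)         # rendered pixel colour
    return rgb, weights

# Toy usage with 64 random samples along one ray:
rgb, w = render_ray(torch.rand(64), torch.rand(64, 3), torch.full((64,), 0.02))
```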

Citations: 0