
Latest publications in IEEE transactions on image processing : a publication of the IEEE Signal Processing Society

A Single-Frame Deflectometry Method for Online Inspection of Light-Transmitting Components
Ning Yan;Dongxue Wang;Lei Liu;Zhuotong Li;Shuaipeng Yuan;Xiaodong Zhang
Transparent materials are widely used in industrial applications, such as construction, transportation, and optics. However, the complex optical properties of these materials make it difficult to achieve precise surface form measurements, especially for bulk surface form inspection in industrial environments. Traditional structured light-based measurement methods often struggle with suboptimal signal-to-noise ratios, making them ineffective. Currently, there is a lack of efficient techniques for real-time inspection of such components. This paper proposes a single-frame measurement technique based on deflectometry for large-size transparent surfaces. It utilizes the reflective characteristics of the measured surface, making it independent of the surface’s diffuse reflection properties. This fundamentally solves the issues associated with signal-to-noise ratios. By discretizing the phase map, it separates the multiple surface reflection characteristics of transparent devices, enabling transparent device measurement. To meet the requirements of industrial dynamic measurement, this technique only needs a simple and low-cost system structure, which contains just two cameras for image capture. It does not require phase shifting to complete the measurement, making it independent of the screen and having the potential for larger surface measurement. The proposed method was used to measure a 400 mm aperture automobile glass, and the results showed that it is able to achieve a measurement accuracy on the order of $10~\mu$m. The method proposed in this paper overcomes the influence of surface reflection on transparent objects and significantly improves the efficiency and accuracy of large-sized transparent surface measurements by using a single-frame image measurement. Moreover, this method shows promise for broader applications, including measurements of lenses and HUD (Heads-Up Display) components, showcasing significant potential for industrial applications.
Citations: 0
Learnable Feature Augmentation Framework for Temporal Action Localization
Yepeng Tang;Weining Wang;Chunjie Zhang;Jing Liu;Yao Zhao
Temporal action localization (TAL) has drawn much attention in recent years; however, the performance of previous methods is still far from satisfactory due to the lack of annotated untrimmed video data. To deal with this issue, we propose to improve the utilization of current data through feature augmentation. Given an input video, we first extract video features with pre-trained video encoders, and then randomly mask various semantic contents of the features to obtain different views of them. To avoid damaging important action-related semantic information, we further develop a learnable feature augmentation framework to generate better views of videos. In particular, a Mask-based Feature Augmentation Module (MFAM) is proposed. The MFAM has three advantages: 1) it captures the temporal and semantic relationships of the original video features, 2) it generates masked features that retain indispensable action-related information, and 3) it randomly recycles some masked information to ensure diversity. Finally, we input the masked features and the original features into shared action detectors respectively, and perform action classification and localization jointly for model learning. The proposed framework can improve the robustness and generalization of action detectors by learning more and better views of videos. In the testing stage, the MFAM can be removed, so it brings no extra computational cost. Extensive experiments are conducted on four TAL benchmark datasets. Our proposed framework significantly improves different TAL models and achieves state-of-the-art performance.
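As a rough illustration of the mask-based feature augmentation idea described above, the following Python sketch randomly masks channels of pre-extracted snippet features and recycles a small fraction of the masked entries. The function name, masking granularity, and ratios are assumptions made for illustration, not the authors' implementation.

```python
import torch

def mask_feature_augment(feats, mask_ratio=0.3, recycle_ratio=0.1):
    """Randomly mask channels of pre-extracted video features and recycle a
    small fraction of the masked entries (illustrative sketch only).

    feats: (T, C) tensor of per-snippet features from a frozen video encoder.
    """
    T, C = feats.shape
    mask = torch.rand(T, C) < mask_ratio              # True = masked position
    # Recycle: randomly restore a few masked positions so different views
    # keep diverse, non-identical information.
    recycle = (torch.rand(T, C) < recycle_ratio) & mask
    mask = mask & ~recycle
    return feats.masked_fill(mask, 0.0), mask

# Both the masked view and the original features would be fed to shared
# action detectors during training; at test time the masking is dropped.
video_feats = torch.randn(128, 2048)                  # e.g., 128 snippets
masked_feats, used_mask = mask_feature_augment(video_feats)
```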
Citations: 0
Uncertainty-aware Active Domain Adaptive Salient Object Detection.
Guanbin Li, Chaowei Fang, Zhuohua Chen, Mingzhi Mao, Liang Lin

Due to the advancement of deep learning, the performance of salient object detection (SOD) has been significantly improved. However, deep learning-based techniques require a sizable amount of pixel-wise annotations. To relieve the burden of data annotation, a variety of deep weakly-supervised and unsupervised SOD methods have been proposed, yet the performance gap between them and fully supervised methods remains significant. In this paper, we propose a novel, cost-efficient salient object detection framework, which can adapt models from synthetic data to real-world data with the help of a limited number of actively selected annotations. Specifically, we first construct a synthetic SOD dataset by copying and pasting foreground objects into pure background images. With the masks of foreground objects taken as the ground-truth saliency maps, this dataset can be used for training the SOD model initially. However, due to the large domain gap between synthetic images and real-world images, the performance of the initially trained model on the real-world images is deficient. To transfer the model from the synthetic dataset to the real-world datasets, we further design an uncertainty-aware active domain adaptive algorithm to generate labels for the real-world target images. The prediction variances against data augmentations are utilized to calculate the superpixel-level uncertainty values. For those superpixels with relatively low uncertainty, we directly generate pseudo labels according to the network predictions. Meanwhile, we select a few superpixels with high uncertainty scores and assign labels to them manually. This labeling strategy is capable of generating high-quality labels without incurring too much annotation cost. Experimental results on six benchmark SOD datasets demonstrate that our method outperforms the existing state-of-the-art weakly-supervised and unsupervised SOD methods and is even comparable to the fully supervised ones. Code will be released at: https://github.com/czh-3/UADA.
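The uncertainty-aware selection step described above can be illustrated with a short Python sketch: prediction variance across augmentations is averaged within each superpixel, and superpixels are split into pseudo-labeled and manually labeled sets by a threshold. The function names and the variance-based uncertainty measure are assumptions; the paper's exact formulation may differ.

```python
import numpy as np

def superpixel_uncertainty(pred_maps, superpixels):
    """Mean prediction variance per superpixel.

    pred_maps:   (A, H, W) saliency predictions for A augmented inputs,
                 warped back onto the original image grid.
    superpixels: (H, W) integer superpixel labels.
    """
    var_map = pred_maps.var(axis=0)                   # pixel-wise variance
    return {int(sp): float(var_map[superpixels == sp].mean())
            for sp in np.unique(superpixels)}

def split_by_uncertainty(uncertainty, thresh):
    """Low-uncertainty superpixels receive pseudo labels from the network;
    high-uncertainty ones are sent to a human annotator."""
    pseudo = [sp for sp, u in uncertainty.items() if u <= thresh]
    manual = [sp for sp, u in uncertainty.items() if u > thresh]
    return pseudo, manual
```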

Citations: 0
IMU-Assisted Accurate Blur Kernel Re-Estimation in Non-Uniform Camera Shake Deblurring
Jianxiang Rong;Hua Huang;Jia Li
Image deblurring for camera shake is a widely studied problem in the field of computer vision. A promising solution is patch-wise non-uniform image deblurring, where a linear transformation model is typically established between different blur kernels to re-estimate poorly estimated blur kernels. However, the linear model struggles to effectively describe the nonlinear transformation relationships between blur kernels. A key observation is that the inertial measurement unit (IMU) provides motion data of the camera, which is helpful in describing the landmarks of the blur kernel. This paper presents a new IMU-assisted method for the re-estimation of poorly estimated blur kernels. The method establishes a nonlinear transformation relationship model between the blur kernels of different patches using IMU motion data. Subsequently, an optimization problem is solved to re-estimate poorly estimated blur kernels by combining this relationship model with neighboring well-estimated kernels. Experimental results demonstrate that this blur kernel re-estimation method outperforms existing methods.
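To make the role of IMU data concrete, here is a hedged Python sketch of the standard rotational-shake model, in which a pixel moves as x' ~ K R K^-1 x, used to turn a sequence of IMU-derived rotations into the motion trace of a patch center. This is a generic illustration of how inertial data constrains the shape of a local blur kernel, not the paper's nonlinear transformation model.

```python
import numpy as np

def patch_blur_trace(K, rotations, patch_center):
    """Project IMU-derived camera rotations to the image-plane motion of one
    patch center using the rotational-shake model x' ~ K R K^-1 x.

    K:            (3, 3) camera intrinsic matrix.
    rotations:    iterable of (3, 3) rotation matrices sampled over the exposure.
    patch_center: (x, y) pixel coordinates of the patch center.
    Returns an (S, 2) trajectory, i.e., landmarks of the local blur kernel.
    """
    x = np.array([patch_center[0], patch_center[1], 1.0])
    K_inv = np.linalg.inv(K)
    trace = []
    for R in rotations:
        p = K @ R @ K_inv @ x
        trace.append(p[:2] / p[2])
    return np.array(trace)
```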
Citations: 0
Multi-Label Adversarial Attack With New Measures and Self-Paced Constraint Weighting
Fengguang Su;Ou Wu;Weiyao Zhu
An adversarial attack is typically implemented by solving a constrained optimization problem. In top-k adversarial attack implementations for multi-label learning, the attack failure degree (AFD) and attack cost (AC) of a possible attack are major concerns. According to our experimental and theoretical analysis, existing methods are negatively impacted by the coarse measures for AFD/AC and the indiscriminate treatment of all constraints, particularly when there is no ideal solution. Hence, this study first develops a refined measure based on the Jaccard index that is appropriate for AFD and AC, distinguishing the failure degrees/costs of two possible attacks better than the existing indicator-function-based scheme. Furthermore, we formulate novel optimization problems with the least constraint violation via the new measures for AFD and AC, and theoretically demonstrate the effectiveness of weighting slack variables for constraints. Finally, a self-paced weighting strategy is proposed to assign different priorities to various constraints during optimization, resulting in larger attack gains compared to previous indiscriminate schemes. Meanwhile, our method avoids fluctuations during optimization, especially in the presence of highly conflicting constraints. Extensive experiments on four benchmark datasets validate the effectiveness of our method across different evaluation metrics.
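A minimal sketch of a Jaccard-style attack failure degree follows, illustrating why a set-overlap measure can rank two failed attacks that an indicator function would score identically. The exact definition used in the paper is not reproduced here; this is only one plausible instantiation.

```python
def jaccard(a, b):
    """Jaccard index of two label sets."""
    a, b = set(a), set(b)
    return len(a & b) / len(a | b) if (a | b) else 1.0

def attack_failure_degree(topk_after_attack, target_labels):
    """0 when the attacked top-k set equals the target set, growing toward 1
    as the overlap shrinks (illustrative definition only)."""
    return 1.0 - jaccard(topk_after_attack, target_labels)

# Two candidate attacks that an indicator function would both score simply as
# "failed", but that a Jaccard-based measure ranks by how badly they failed.
print(attack_failure_degree({1, 4, 7}, {1, 4, 9}))    # 0.5
print(attack_failure_degree({2, 5, 8}, {1, 4, 9}))    # 1.0
```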
Citations: 0
ECEA: Extensible Co-Existing Attention for Few-Shot Object Detection.
Zhimeng Xin, Tianxu Wu, Shiming Chen, Yixiong Zou, Ling Shao, Xinge You

Few-shot object detection (FSOD) identifies objects from extremely few annotated samples. Most existing FSOD methods apply the two-stage learning paradigm, which transfers the knowledge learned from abundant base classes to assist the few-shot detectors by learning global features. However, such FSOD approaches seldom consider the localization of objects from local to global. Limited by the scarce training data in FSOD, the training samples of novel classes typically capture only parts of objects, resulting in such FSOD methods being unable to detect completely unseen objects during testing. To tackle this problem, we propose an Extensible Co-Existing Attention (ECEA) module that enables the model to infer the global object from its local parts. Specifically, we first devise an extensible attention mechanism that starts with a local region and extends attention to co-existing regions that are similar and adjacent to the given local region. We then implement the extensible attention mechanism at different feature scales to progressively discover the full object in various receptive fields. In the training process, the model learns the extensible ability on the base stage with abundant samples and transfers it to the novel stage of continuous extensible learning, which helps the few-shot model quickly adapt in extending local regions to co-existing regions. Extensive experiments on the PASCAL VOC and COCO datasets show that our ECEA module can assist the few-shot detector to completely predict the object even when some regions fail to appear in the training samples, achieving a new state-of-the-art compared with existing FSOD methods. Code is released at https://github.com/zhimengXin/ECEA.

Citations: 0
Angular Isotonic Loss Guided Multi-Layer Integration for Few-Shot Fine-Grained Image Classification
Li-Jun Zhao;Zhen-Duo Chen;Zhen-Xiang Ma;Xin Luo;Xin-Shun Xu
Recent research on few-shot fine-grained image classification (FSFG) has predominantly focused on extracting discriminative features. The limited attention paid to the role of loss functions has resulted in weaker preservation of similarity relationships between query and support instances, thereby potentially limiting the performance of FSFG. In this regard, we analyze the limitations of widely adopted cross-entropy loss and introduce a novel Angular ISotonic (AIS) loss. The AIS loss introduces an angular margin to constrain the prototypes to maintain a certain distance from a pre-set threshold. It guides the model to converge more stably, learn clearer boundaries among highly similar classes, and achieve higher accuracy faster with limited instances. Moreover, to better accommodate the feature requirements of the AIS loss and fully exploit its potential in FSFG, we propose a Multi-Layer Integration (MLI) network that captures object features from multiple perspectives to provide more comprehensive and informative representations of the input images. Extensive experiments demonstrate the effectiveness of our proposed method on four standard fine-grained benchmarks. Codes are available at: https://github.com/Legenddddd/AIS-MLI.
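For intuition about the angular-margin idea, the following is a hedged Python sketch of a prototype loss in which the cosine similarity to the correct prototype is pushed above a pre-set threshold by a margin before softmax scoring. The names tau, margin, and scale are illustrative hyperparameters, and this is not the exact AIS formulation.

```python
import torch
import torch.nn.functional as F

def angular_margin_loss(queries, prototypes, labels, tau=0.4, margin=0.1, scale=10.0):
    """Cross-entropy over cosine similarities plus a hinge that pushes the
    similarity to the correct prototype above a pre-set threshold `tau` by
    at least `margin` (illustrative angular-margin loss, not the AIS loss).

    queries:    (B, D) query embeddings.
    prototypes: (N, D) class prototypes (e.g., mean support embeddings).
    labels:     (B,) ground-truth class indices.
    """
    cos = F.normalize(queries, dim=1) @ F.normalize(prototypes, dim=1).T   # (B, N)
    target_cos = cos.gather(1, labels.unsqueeze(1)).squeeze(1)
    hinge = F.relu(tau + margin - target_cos).mean()
    return F.cross_entropy(scale * cos, labels) + hinge
```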
Citations: 0
E-Calib: A Fast, Robust, and Accurate Calibration Toolbox for Event Cameras
Mohammed Salah;Abdulla Ayyad;Muhammad Humais;Daniel Gehrig;Abdelqader Abusafieh;Lakmal Seneviratne;Davide Scaramuzza;Yahya Zweiri
Event cameras triggered a paradigm shift in the computer vision community, characterized by their asynchronous nature, low latency, and high dynamic range. Calibration of event cameras is always essential to account for the sensor's intrinsic parameters and for 3D perception. However, conventional image-based calibration techniques are not applicable due to the asynchronous, binary output of the sensor. The current standard for calibrating event cameras relies on either blinking patterns or event-based image reconstruction algorithms. These approaches are difficult to deploy in factory settings and are affected by noise and artifacts that degrade calibration performance. To bridge these limitations, we present E-Calib, a novel, fast, robust, and accurate calibration toolbox for event cameras that utilizes the asymmetric circle grid for its robustness to out-of-focus scenes. E-Calib introduces an efficient reweighted least squares (eRWLS) method for feature extraction of the calibration pattern circles with sub-pixel accuracy and robustness to noise. In addition, a modified hierarchical clustering algorithm is devised to detect the calibration grid apart from the background clutter. The proposed method is tested in a variety of rigorous experiments for different event camera models, on circle grids with different geometric properties, on varying calibration trajectories and speeds, and under challenging illumination conditions. The results show that our approach outperforms the state-of-the-art in detection success rate, reprojection error, and pose estimation accuracy.
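The reweighted least squares idea behind eRWLS can be sketched generically in Python: an iteratively reweighted centroid estimate that down-weights outlying events when locating a circle's center. The weighting scheme and function name are assumptions for illustration, not the toolbox's actual code.

```python
import numpy as np

def robust_circle_center(events_xy, iters=5, eps=1e-6):
    """Iteratively reweighted least-squares style center estimate for one
    circle blob: events far from the current center are down-weighted, which
    makes the centroid robust to noise events (generic sketch only).

    events_xy: (N, 2) pixel coordinates of events assigned to one circle.
    """
    center = events_xy.mean(axis=0)
    for _ in range(iters):
        r = np.linalg.norm(events_xy - center, axis=1)
        w = 1.0 / (r + eps)                   # robust (L1-like) weights
        center = (w[:, None] * events_xy).sum(axis=0) / w.sum()
    return center
```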
Citations: 0
Expanding and Refining Hybrid Compressors for Efficient Object Re-Identification
Yi Xie;Hanxiao Wu;Jianqing Zhu;Huanqiang Zeng;Jing Zhang
Recent object re-identification (Re-ID) methods gain high efficiency via lightweight student models trained by knowledge distillation (KD). However, the huge architectural difference between lightweight students and heavy teachers causes students to have difficulty receiving and understanding teachers’ knowledge, thus losing some accuracy. To this end, we propose a refiner-expander-refiner (RER) structure to enlarge a student’s representational capacity and prune the student’s complexity. The expander is a multi-branch convolutional layer that expands the student’s representational capacity to understand a teacher’s knowledge comprehensively; it does not require any feature-dimensional adapter, which avoids knowledge distortions. The two refiners are $1\times 1$ convolutional layers that prune the input and output channels of the expander. In addition, in order to alleviate the competition between accuracy-related and pruning-related gradients, we design a common consensus gradient resetting (CCGR) method, which discards unimportant channels according to the intersection of each sample’s unimportant-channel judgments. Finally, the trained RER can be simplified into a slim convolutional layer via re-parameterization to speed up inference. As a result, we propose an expanding and refining hybrid compressing (ERHC) method. Extensive experiments show that our ERHC achieves superior inference speed and accuracy; e.g., on the VeRi-776 dataset, given ResNet101 as the teacher, ERHC saves 75.33% of model parameters (MP) and 74.29% of floating-point operations (FLOPs) without sacrificing accuracy.
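A schematic PyTorch sketch of a refiner-expander-refiner block is shown below: two 1x1 refiners bracket a multi-branch expander whose parallel branches are summed. Channel counts, branch count, and kernel sizes are illustrative assumptions, and the re-parameterization step is only noted in a comment.

```python
import torch
import torch.nn as nn

class RER(nn.Module):
    """Refiner-expander-refiner block: 1x1 refiners prune the channels
    entering and leaving a multi-branch expander (schematic configuration)."""

    def __init__(self, in_ch, mid_ch, out_ch, branches=4):
        super().__init__()
        self.refiner_in = nn.Conv2d(in_ch, mid_ch, kernel_size=1)
        # Multi-branch expander: parallel 3x3 convolutions whose outputs are summed.
        self.expander = nn.ModuleList(
            [nn.Conv2d(mid_ch, mid_ch, kernel_size=3, padding=1) for _ in range(branches)]
        )
        self.refiner_out = nn.Conv2d(mid_ch, out_ch, kernel_size=1)

    def forward(self, x):
        x = self.refiner_in(x)
        x = torch.stack([branch(x) for branch in self.expander], dim=0).sum(dim=0)
        return self.refiner_out(x)

# Because every operation above is linear, the trained block can be folded
# ("re-parameterized") into a single slim convolution for fast inference.
block = RER(in_ch=256, mid_ch=512, out_ch=256)
y = block(torch.randn(1, 256, 16, 16))
```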
Citations: 0
Analysis of Coding Gain Due to In-Loop Reshaping
Chau-Wai Wong;Chang-Hong Fu;Mengting Xu;Guan-Ming Su
Reshaping, a point operation that alters the characteristics of signals, has been shown capable of improving the compression ratio in video coding practices. Out-of-loop reshaping that directly modifies the input video signal was first adopted as the supplemental enhancement information (SEI) for the HEVC/H.265 without the need to alter the core design of the video codec. VVC/H.266 further improves the coding efficiency by adopting in-loop reshaping that modifies the residual signal being processed in the hybrid coding loop. In this paper, we theoretically analyze the rate-distortion performance of the in-loop reshaping and use experiments to verify the theoretical result. We prove that the in-loop reshaping can improve coding efficiency when the entropy coder adopted in the coding pipeline is suboptimal, which is in line with the practical scenarios that video codecs operate in. We derive the PSNR gain in a closed form and show that the theoretically predicted gain is consistent with that measured from experiments using standard testing video sequences.
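As a toy illustration of reshaping as a point operation, the Python sketch below builds a piecewise-linear forward mapping and its inverse from pivot points and applies them to a block of sample values. The pivot values are made up, and this is not the normative VVC/H.266 LMCS mapping.

```python
import numpy as np

def build_reshaper(pivots_in, pivots_out):
    """Forward point-wise reshaping function defined by piecewise-linear
    pivot points, and its inverse (pivots must be monotonically increasing)."""
    def forward(x):
        return np.interp(x, pivots_in, pivots_out)
    def inverse(y):
        return np.interp(y, pivots_out, pivots_in)
    return forward, inverse

# A made-up mapping that stretches dark code values and compresses bright ones.
forward, inverse = build_reshaper([0, 64, 192, 255], [0, 96, 224, 255])
block = np.array([[10.0, 60.0], [200.0, 250.0]])
reshaped = forward(block)     # applied to the signal inside the coding loop
restored = inverse(reshaped)  # inverse mapping applied after reconstruction
```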
Citations: 0