
Latest Articles: IEEE Transactions on Image Processing (a publication of the IEEE Signal Processing Society)

Rethinking Noise Sampling in Class-Imbalanced Diffusion Models
Chenghao Xu;Jiexi Yan;Muli Yang;Cheng Deng
In the practical application of image generation, dealing with long-tailed data distributions is a common challenge for diffusion-based generative models. To tackle this issue, we investigate the head-class accumulation effect in diffusion models’ latent space, particularly focusing on its correlation to the noise sampling strategy. Our experimental analysis indicates that employing a consistent sampling distribution for the noise prior across all classes leads to a significant bias towards head classes in the noise sampling distribution, which results in poor quality and diversity of the generated images. Motivated by this observation, we propose a novel sampling strategy named Bias-aware Prior Adjusting (BPA) to debias diffusion models in the class-imbalanced scenario. With BPA, each class is automatically assigned an adaptive noise sampling distribution prior during training, effectively mitigating the influence of class imbalance on the generation process. Extensive experiments on several benchmarks demonstrate that images generated using our proposed BPA showcase elevated diversity and superior quality.
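As a rough illustration of a class-adaptive noise prior (a sketch under stated assumptions, not the paper's exact BPA procedure; the per-class parameters mu and log_sigma below are hypothetical placeholders that would be adapted during training), in PyTorch:

import torch

class ClassConditionalNoisePrior:
    """Per-class Gaussian prior N(mu_c, sigma_c^2 I) for diffusion noise sampling."""
    def __init__(self, num_classes, shape):
        # One adaptive mean/scale per class; a shared prior corresponds to mu=0, sigma=1.
        self.mu = torch.zeros(num_classes, *shape)
        self.log_sigma = torch.zeros(num_classes, *shape)

    def sample(self, labels):
        # labels: (B,) class indices; returns noise of shape (B, *shape)
        eps = torch.randn(labels.shape[0], *self.mu.shape[1:])
        return self.mu[labels] + self.log_sigma[labels].exp() * eps

prior = ClassConditionalNoisePrior(num_classes=10, shape=(3, 32, 32))
noise = prior.sample(torch.randint(0, 10, (4,)))  # noise for a batch of 4 images
print(noise.shape)  # torch.Size([4, 3, 32, 32])

A conventional diffusion model is the special case in which every class shares the same standard normal prior.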
{"title":"Rethinking Noise Sampling in Class-Imbalanced Diffusion Models","authors":"Chenghao Xu;Jiexi Yan;Muli Yang;Cheng Deng","doi":"10.1109/TIP.2024.3485484","DOIUrl":"10.1109/TIP.2024.3485484","url":null,"abstract":"In the practical application of image generation, dealing with long-tailed data distributions is a common challenge for diffusion-based generative models. To tackle this issue, we investigate the head-class accumulation effect in diffusion models’ latent space, particularly focusing on its correlation to the noise sampling strategy. Our experimental analysis indicates that employing a consistent sampling distribution for the noise prior across all classes leads to a significant bias towards head classes in the noise sampling distribution, which results in poor quality and diversity of the generated images. Motivated by this observation, we propose a novel sampling strategy named Bias-aware Prior Adjusting (BPA) to debias diffusion models in the class-imbalanced scenario. With BPA, each class is automatically assigned an adaptive noise sampling distribution prior during training, effectively mitigating the influence of class imbalance on the generation process. Extensive experiments on several benchmarks demonstrate that images generated using our proposed BPA showcase elevated diversity and superior quality.","PeriodicalId":94032,"journal":{"name":"IEEE transactions on image processing : a publication of the IEEE Signal Processing Society","volume":"33 ","pages":"6298-6308"},"PeriodicalIF":0.0,"publicationDate":"2024-10-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142541349","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
Image Copy-Move Forgery Detection via Deep PatchMatch and Pairwise Ranking Learning
Yuanman Li;Yingjie He;Changsheng Chen;Li Dong;Bin Li;Jiantao Zhou;Xia Li
Recent advances in deep learning algorithms have shown impressive progress in image copy-move forgery detection (CMFD). However, these algorithms lack generalizability in practical scenarios where the copied regions are not present in the training images, or the cloned regions are part of the background. Additionally, these algorithms utilize convolution operations to distinguish source and target regions, leading to unsatisfactory results when the target regions blend well with the background. To address these limitations, this study proposes a novel end-to-end CMFD framework that integrates the strengths of conventional and deep learning methods. Specifically, the study develops a deep cross-scale PatchMatch (PM) method that is customized for CMFD to locate copy-move regions. Unlike existing deep models, our approach utilizes features extracted from high-resolution scales to seek explicit and reliable point-to-point matching between source and target regions. Furthermore, we propose a novel pairwise rank learning framework to separate source and target regions. By leveraging the strong prior of point-to-point matches, the framework can identify subtle differences and effectively discriminate between source and target regions, even when the target regions blend well with the background. Our framework is fully differentiable and can be trained end-to-end. Comprehensive experimental results highlight the remarkable generalizability of our scheme across various copy-move scenarios, significantly outperforming existing methods.
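As a generic illustration of pairwise ranking between source and target regions (a sketch only; the paper's actual loss and score definitions may differ), assuming a scalar discrimination score per matched pixel or patch, where source scores should exceed target scores by a margin:

import torch
import torch.nn.functional as F

def pairwise_rank_loss(source_scores, target_scores, margin=1.0):
    """Hinge-style pairwise ranking: every source score should exceed
    every target score by at least `margin`."""
    # source_scores: (Ns,), target_scores: (Nt,) scores of matched pixels/patches
    diff = source_scores.unsqueeze(1) - target_scores.unsqueeze(0)  # (Ns, Nt) pairwise gaps
    return F.relu(margin - diff).mean()

src = torch.randn(128, requires_grad=True)
tgt = torch.randn(96, requires_grad=True)
loss = pairwise_rank_loss(src, tgt)
loss.backward()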
{"title":"Image Copy-Move Forgery Detection via Deep PatchMatch and Pairwise Ranking Learning","authors":"Yuanman Li;Yingjie He;Changsheng Chen;Li Dong;Bin Li;Jiantao Zhou;Xia Li","doi":"10.1109/TIP.2024.3482191","DOIUrl":"10.1109/TIP.2024.3482191","url":null,"abstract":"Recent advances in deep learning algorithms have shown impressive progress in image copy-move forgery detection (CMFD). However, these algorithms lack generalizability in practical scenarios where the copied regions are not present in the training images, or the cloned regions are part of the background. Additionally, these algorithms utilize convolution operations to distinguish source and target regions, leading to unsatisfactory results when the target regions blend well with the background. To address these limitations, this study proposes a novel end-to-end CMFD framework that integrates the strengths of conventional and deep learning methods. Specifically, the study develops a deep cross-scale PatchMatch (PM) method that is customized for CMFD to locate copy-move regions. Unlike existing deep models, our approach utilizes features extracted from high-resolution scales to seek explicit and reliable point-to-point matching between source and target regions. Furthermore, we propose a novel pairwise rank learning framework to separate source and target regions. By leveraging the strong prior of point-to-point matches, the framework can identify subtle differences and effectively discriminate between source and target regions, even when the target regions blend well with the background. Our framework is fully differentiable and can be trained end-to-end. Comprehensive experimental results highlight the remarkable generalizability of our scheme across various copy-move scenarios, significantly outperforming existing methods.","PeriodicalId":94032,"journal":{"name":"IEEE transactions on image processing : a publication of the IEEE Signal Processing Society","volume":"34 ","pages":"425-440"},"PeriodicalIF":0.0,"publicationDate":"2024-10-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142490459","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
λ-Domain Rate Control via Wavelet-Based Residual Neural Network for VVC HDR Intra Coding
Feng Yuan;Jianjun Lei;Zhaoqing Pan;Bo Peng;Haoran Xie
High dynamic range (HDR) video offers a more realistic visual experience than standard dynamic range (SDR) video, while introducing new challenges to both compression and transmission. Rate control is an effective technology to overcome these challenges and ensure optimal HDR video delivery. However, the rate control algorithm in the latest video coding standard, versatile video coding (VVC), is tailored to SDR videos and does not produce good coding results when encoding HDR videos. To address this problem, a data-driven λ-domain rate control algorithm is proposed for VVC HDR intra frames in this paper. First, the coding characteristics of HDR intra coding are analyzed, and a piecewise R-λ model is proposed to accurately determine the correlation between the rate (R) and the Lagrange parameter λ for HDR intra frames. Then, to optimize bit allocation at the coding tree unit (CTU) level, a wavelet-based residual neural network (WRNN) is developed to accurately predict the parameters of the piecewise R-λ model for each CTU. Third, a large-scale HDR dataset is established for training WRNN, which facilitates the application of deep learning in HDR intra coding. Extensive experimental results show that our proposed HDR intra frame rate control algorithm achieves superior coding results compared to the state-of-the-art algorithms. The source code of this work will be released at https://github.com/TJU-Videocoding/WRNN.git.
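For background, λ-domain rate control in HEVC/VVC conventionally models the relation between the Lagrange multiplier and the rate in bits per pixel as λ = α · bpp^β; a piecewise variant simply switches (α, β) across bpp intervals. A toy sketch with purely illustrative parameter values (a learned predictor such as the proposed WRNN would instead supply the per-CTU parameters):

def piecewise_lambda(bpp, pieces):
    """pieces: list of (bpp_upper_bound, alpha, beta); returns lambda = alpha * bpp**beta
    using the parameters of the interval that contains bpp."""
    for upper, alpha, beta in pieces:
        if bpp <= upper:
            return alpha * (bpp ** beta)
    upper, alpha, beta = pieces[-1]   # fall back to the last interval
    return alpha * (bpp ** beta)

# Illustrative (alpha, beta) values only, not calibrated to any codec.
pieces = [(0.1, 6.0, -1.4), (0.5, 4.5, -1.2), (float("inf"), 3.5, -1.0)]
print(piecewise_lambda(0.25, pieces))  # lambda for a CTU targeting 0.25 bpp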
{"title":"λ-Domain Rate Control via Wavelet-Based Residual Neural Network for VVC HDR Intra Coding","authors":"Feng Yuan;Jianjun Lei;Zhaoqing Pan;Bo Peng;Haoran Xie","doi":"10.1109/TIP.2024.3484173","DOIUrl":"10.1109/TIP.2024.3484173","url":null,"abstract":"High dynamic range (HDR) video offers a more realistic visual experience than standard dynamic range (SDR) video, while introducing new challenges to both compression and transmission. Rate control is an effective technology to overcome these challenges, and ensure optimal HDR video delivery. However, the rate control algorithm in the latest video coding standard, versatile video coding (VVC), is tailored to SDR videos, and does not produce well coding results when encoding HDR videos. To address this problem, a data-driven \u0000<inline-formula> <tex-math>$lambda $ </tex-math></inline-formula>\u0000-domain rate control algorithm is proposed for VVC HDR intra frames in this paper. First, the coding characteristics of HDR intra coding are analyzed, and a piecewise R-\u0000<inline-formula> <tex-math>$lambda $ </tex-math></inline-formula>\u0000 model is proposed to accurately determine the correlation between the rate (R) and the Lagrange parameter \u0000<inline-formula> <tex-math>$lambda $ </tex-math></inline-formula>\u0000 for HDR intra frames. Then, to optimize bit allocation at the coding tree unit (CTU)-level, a wavelet-based residual neural network (WRNN) is developed to accurately predict the parameters of the piecewise R-\u0000<inline-formula> <tex-math>$lambda $ </tex-math></inline-formula>\u0000 model for each CTU. Third, a large-scale HDR dataset is established for training WRNN, which facilitates the applications of deep learning in HDR intra coding. Extensive experimental results show that our proposed HDR intra frame rate control algorithm achieves superior coding results than the state-of-the-art algorithms. The source code of this work will be released at \u0000<uri>https://github.com/TJU-Videocoding/WRNN.git</uri>\u0000.","PeriodicalId":94032,"journal":{"name":"IEEE transactions on image processing : a publication of the IEEE Signal Processing Society","volume":"33 ","pages":"6189-6203"},"PeriodicalIF":0.0,"publicationDate":"2024-10-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142490613","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
AS2LS: Adaptive Anatomical Structure-Based Two-Layer Level Set Framework for Medical Image Segmentation
Tianyi Han;Haoyu Cao;Yunyun Yang
Medical images often exhibit intricate structures, inhomogeneous intensity, significant noise and blurred edges, presenting challenges for medical image segmentation. Several segmentation algorithms grounded in mathematics, computer science, and medical domains have been proposed to address this matter; nevertheless, there is still considerable scope for improvement. This paper proposes a novel adaptive anatomical structure-based two-layer level set framework (AS2LS) for segmenting organs with concentric structures, such as the left ventricle and the fundus. By adaptive fitting region and edge intensity information, the AS2LS achieves high accuracy in segmenting complex medical images characterized by inhomogeneous intensity, blurred boundaries and interference from surrounding organs. Moreover, we introduce a novel two-layer level set representation based on anatomical structures, coupled with a two-stage level set evolution algorithm. Experimental results demonstrate the superior accuracy of AS2LS in comparison to representative level set methods and deep learning methods.
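As a rough illustration of why a two-layer level set suits concentric anatomy (a generic construction, not the specific AS2LS energy or evolution scheme), a single function φ can carry two nested contours, e.g. the zero level for the inner boundary and a second level c for the outer one:

import numpy as np

def two_layer_regions(phi, c=1.0):
    """Partition the domain with one level set function phi:
    inner region (phi < 0), ring (0 <= phi < c), background (phi >= c)."""
    inner = phi < 0
    ring = (phi >= 0) & (phi < c)
    background = phi >= c
    return inner, ring, background

# Toy example: signed-distance-like function of a circle of radius 10 on a 64x64 grid.
y, x = np.mgrid[0:64, 0:64]
phi = np.sqrt((x - 32) ** 2 + (y - 32) ** 2) - 10.0
inner, ring, background = two_layer_regions(phi, c=6.0)
print(inner.sum(), ring.sum(), background.sum())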
{"title":"AS2LS: Adaptive Anatomical Structure-Based Two-Layer Level Set Framework for Medical Image Segmentation","authors":"Tianyi Han;Haoyu Cao;Yunyun Yang","doi":"10.1109/TIP.2024.3483563","DOIUrl":"10.1109/TIP.2024.3483563","url":null,"abstract":"Medical images often exhibit intricate structures, inhomogeneous intensity, significant noise and blurred edges, presenting challenges for medical image segmentation. Several segmentation algorithms grounded in mathematics, computer science, and medical domains have been proposed to address this matter; nevertheless, there is still considerable scope for improvement. This paper proposes a novel adaptive anatomical structure-based two-layer level set framework (AS2LS) for segmenting organs with concentric structures, such as the left ventricle and the fundus. By adaptive fitting region and edge intensity information, the AS2LS achieves high accuracy in segmenting complex medical images characterized by inhomogeneous intensity, blurred boundaries and interference from surrounding organs. Moreover, we introduce a novel two-layer level set representation based on anatomical structures, coupled with a two-stage level set evolution algorithm. Experimental results demonstrate the superior accuracy of AS2LS in comparison to representative level set methods and deep learning methods.","PeriodicalId":94032,"journal":{"name":"IEEE transactions on image processing : a publication of the IEEE Signal Processing Society","volume":"33 ","pages":"6393-6408"},"PeriodicalIF":0.0,"publicationDate":"2024-10-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142489751","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
Energy-Based Domain Adaptation Without Intermediate Domain Dataset for Foggy Scene Segmentation
Donggon Jang;Sunhyeok Lee;Gyuwon Choi;Yejin Lee;Sanghyeok Son;Dae-Shik Kim
Robust segmentation performance under dense fog is crucial for autonomous driving, but collecting labeled real foggy scene datasets is burdensome in the real world. To this end, existing methods have adapted models trained on labeled clear weather images to the unlabeled real foggy domain. However, these approaches require intermediate domain datasets (e.g. synthetic fog) and involve multi-stage training, making them cumbersome and less practical for real-world applications. In addition, the issue of overconfident pseudo-labels by a confidence score remains less explored in self-training for foggy scene adaptation. To resolve these issues, we propose a new framework, named DAEN, which Directly Adapts without additional datasets or multi-stage training and leverages an ENergy score in self-training. Notably, we integrate a High-order Style Matching (HSM) module into the network to match high-order statistics between clear weather features and real foggy features. HSM enables the network to implicitly learn complex fog distributions without relying on intermediate domain datasets or multi-stage training. Furthermore, we introduce Energy Score-based Pseudo-Labeling (ESPL) to mitigate the overconfidence issue of the confidence score in self-training. ESPL generates more reliable pseudo-labels through a pixel-wise energy score, thereby alleviating bias and preventing the model from assigning pseudo-labels exclusively to head classes. Extensive experiments demonstrate that DAEN achieves state-of-the-art performance on three real foggy scene datasets and exhibits a generalization ability to other adverse weather conditions. Code is available at https://github.com/jdg900/daen
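For reference, the free-energy score commonly used for uncertainty estimation is E(x) = -logsumexp(f(x)) over class logits; a pixel-wise version for segmentation logits, with a simple threshold rule for keeping pseudo-labels, might look like the sketch below (the threshold value and selection rule are placeholders, not the paper's ESPL):

import torch

def pixelwise_energy(logits, temperature=1.0):
    """logits: (B, C, H, W) segmentation logits; returns (B, H, W) energy scores.
    Lower energy indicates higher model certainty."""
    return -temperature * torch.logsumexp(logits / temperature, dim=1)

def select_pseudo_labels(logits, energy_threshold):
    energy = pixelwise_energy(logits)       # (B, H, W)
    pseudo = logits.argmax(dim=1)           # (B, H, W) hard pseudo-labels
    keep = energy < energy_threshold        # mask of pixels deemed reliable
    return pseudo, keep

logits = torch.randn(2, 19, 64, 64)
pseudo, keep = select_pseudo_labels(logits, energy_threshold=-2.0)
print(keep.float().mean())  # fraction of pixels retained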
{"title":"Energy-Based Domain Adaptation Without Intermediate Domain Dataset for Foggy Scene Segmentation","authors":"Donggon Jang;Sunhyeok Lee;Gyuwon Choi;Yejin Lee;Sanghyeok Son;Dae-Shik Kim","doi":"10.1109/TIP.2024.3483566","DOIUrl":"10.1109/TIP.2024.3483566","url":null,"abstract":"Robust segmentation performance under dense fog is crucial for autonomous driving, but collecting labeled real foggy scene datasets is burdensome in the real world. To this end, existing methods have adapted models trained on labeled clear weather images to the unlabeled real foggy domain. However, these approaches require intermediate domain datasets (e.g. synthetic fog) and involve multi-stage training, making them cumbersome and less practical for real-world applications. In addition, the issue of overconfident pseudo-labels by a confidence score remains less explored in self-training for foggy scene adaptation. To resolve these issues, we propose a new framework, named DAEN, which Directly Adapts without additional datasets or multi-stage training and leverages an ENergy score in self-training. Notably, we integrate a High-order Style Matching (HSM) module into the network to match high-order statistics between clear weather features and real foggy features. HSM enables the network to implicitly learn complex fog distributions without relying on intermediate domain datasets or multi-stage training. Furthermore, we introduce Energy Score-based Pseudo-Labeling (ESPL) to mitigate the overconfidence issue of the confidence score in self-training. ESPL generates more reliable pseudo-labels through a pixel-wise energy score, thereby alleviating bias and preventing the model from assigning pseudo-labels exclusively to head classes. Extensive experiments demonstrate that DAEN achieves state-of-the-art performance on three real foggy scene datasets and exhibits a generalization ability to other adverse weather conditions. Code is available at \u0000<uri>https://github.com/jdg900/daen</uri>","PeriodicalId":94032,"journal":{"name":"IEEE transactions on image processing : a publication of the IEEE Signal Processing Society","volume":"33 ","pages":"6143-6157"},"PeriodicalIF":0.0,"publicationDate":"2024-10-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142489425","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
MA-ST3D: Motion Associated Self-Training for Unsupervised Domain Adaptation on 3D Object Detection
Chi Zhang;Wenbo Chen;Wei Wang;Zhaoxiang Zhang
Recently, unsupervised domain adaptation (UDA) for 3D object detectors has increasingly garnered attention as a method to eliminate the prohibitive costs associated with generating extensive 3D annotations, which are crucial for effective model training. Self-training (ST) has emerged as a simple and effective technique for UDA. The major issue involved in ST-UDA for 3D object detection is refining the imprecise predictions caused by domain shift and generating accurate pseudo labels as supervisory signals. This study presents a novel ST-UDA framework, named motion-associated self-training for 3D object detection (MA-ST3D), that generates high-quality pseudo labels by associating predictions over 3D point cloud sequences during ego-motion according to spatial and temporal consistency. MA-ST3D maintains a global-local pathway (GLP) architecture to generate high-quality pseudo-labels by leveraging both intra-frame and inter-frame consistencies along the spatial dimension of the LiDAR’s ego-motion. It is also equipped with two memory modules, one for each pathway (global memory and local memory), to suppress the temporal fluctuation of pseudo-labels during self-training iterations. In addition, a motion-aware loss is introduced to regulate pseudo labels differently according to their motion status, which mitigates the harmful spread of false-positive pseudo labels. Finally, our method is evaluated on three representative domain adaptation tasks on authoritative 3D benchmark datasets (i.e., Waymo, Kitti, and nuScenes). MA-ST3D achieves SOTA performance on all evaluated UDA settings and even surpasses weakly supervised DA methods on the Kitti and nuScenes object detection benchmarks.
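As a minimal illustration of the kind of spatial association that ego-motion enables between consecutive LiDAR frames (purely a sketch; MA-ST3D's actual association uses both spatial and temporal consistency and operates on full 3D boxes), box centers from the previous frame can be warped by the relative ego pose and matched to current predictions by nearest center:

import numpy as np

def associate_by_ego_motion(prev_centers, curr_centers, pose_prev_to_curr, radius=2.0):
    """prev_centers, curr_centers: (N, 3) and (M, 3) box centers in their own frames.
    pose_prev_to_curr: 4x4 rigid transform from the previous frame to the current one."""
    homo = np.hstack([prev_centers, np.ones((prev_centers.shape[0], 1))])  # (N, 4)
    warped = (pose_prev_to_curr @ homo.T).T[:, :3]                         # (N, 3) in current frame
    matches = []
    for i, c in enumerate(warped):
        d = np.linalg.norm(curr_centers - c, axis=1)
        j = int(d.argmin())
        if d[j] < radius:            # spatially consistent pair
            matches.append((i, j))
    return matches

pose = np.eye(4); pose[0, 3] = 1.5   # ego vehicle moved 1.5 m forward
prev = np.array([[10.0, 2.0, 0.0]])
curr = np.array([[11.4, 2.1, 0.0], [30.0, -5.0, 0.0]])
print(associate_by_ego_motion(prev, curr, pose))  # [(0, 0)]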
{"title":"MA-ST3D: Motion Associated Self-Training for Unsupervised Domain Adaptation on 3D Object Detection","authors":"Chi Zhang;Wenbo Chen;Wei Wang;Zhaoxiang Zhang","doi":"10.1109/TIP.2024.3482976","DOIUrl":"10.1109/TIP.2024.3482976","url":null,"abstract":"Recently, unsupervised domain adaptation (UDA) for 3D object detectors has increasingly garnered attention as a method to eliminate the prohibitive costs associated with generating extensive 3D annotations, which are crucial for effective model training. Self-training (ST) has emerged as a simple and effective technique for UDA. The major issue involved in ST-UDA for 3D object detection is refining the imprecise predictions caused by domain shift and generating accurate pseudo labels as supervisory signals. This study presents a novel ST-UDA framework to generate high-quality pseudo labels by associating predictions of 3D point cloud sequences during ego-motion according to spatial and temporal consistency, named motion-associated self-training for 3D object detection (MA-ST3D). MA-ST3D maintains a global-local pathway (GLP) architecture to generate high-quality pseudo-labels by leveraging both intra-frame and inter-frame consistencies along the spatial dimension of the LiDAR’s ego-motion. It also equips two memory modules for both global and local pathways, called global memory and local memory, to suppress the temporal fluctuation of pseudo-labels during self-training iterations. In addition, a motion-aware loss is introduced to impose discriminated regulations on pseudo labels with different motion statuses, which mitigates the harmful spread of false positive pseudo labels. Finally, our method is evaluated on three representative domain adaptation tasks on authoritative 3D benchmark datasets (i.e. Waymo, Kitti, and nuScenes). MA-ST3D achieved SOTA performance on all evaluated UDA settings and even surpassed the weakly supervised DA methods on the Kitti and NuScenes object detection benchmark.","PeriodicalId":94032,"journal":{"name":"IEEE transactions on image processing : a publication of the IEEE Signal Processing Society","volume":"33 ","pages":"6227-6240"},"PeriodicalIF":0.0,"publicationDate":"2024-10-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142489490","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
Deblurring Videos Using Spatial-Temporal Contextual Transformer With Feature Propagation
Liyan Zhang;Boming Xu;Zhongbao Yang;Jinshan Pan
We present a simple and effective approach to explore both local spatial-temporal contexts and non-local temporal information for video deblurring. First, we develop an effective spatial-temporal contextual transformer to explore local spatial-temporal contexts from videos. As the features extracted by the spatial-temporal contextual transformer do not model the non-local temporal information of the video well, we then develop a feature propagation method to aggregate useful features from the long-range frames so that both local spatial-temporal contexts and non-local temporal information can be better utilized for video deblurring. Finally, we formulate the spatial-temporal contextual transformer with the feature propagation into a unified deep convolutional neural network (CNN) and train it in an end-to-end manner. We show that using the spatial-temporal contextual transformer with the feature propagation is able to generate useful features and makes the deep CNN model more compact and effective for video deblurring. Extensive experimental results show that the proposed method performs favorably against state-of-the-art ones on the benchmark datasets in terms of accuracy and model parameters.
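A minimal sketch of recurrent feature propagation across frames (the fusion module here is a placeholder convolution; the paper combines propagation with its spatial-temporal contextual transformer, which is not reproduced here):

import torch
import torch.nn as nn

class ForwardPropagation(nn.Module):
    """Propagate a hidden feature along time and fuse it with each frame's feature."""
    def __init__(self, channels):
        super().__init__()
        self.fuse = nn.Conv2d(2 * channels, channels, kernel_size=3, padding=1)

    def forward(self, feats):
        # feats: list of T tensors, each (B, C, H, W)
        hidden = torch.zeros_like(feats[0])
        out = []
        for f in feats:
            hidden = self.fuse(torch.cat([f, hidden], dim=1))  # aggregate long-range information
            out.append(hidden)
        return out

prop = ForwardPropagation(channels=16)
frames = [torch.randn(1, 16, 32, 32) for _ in range(5)]
outs = prop(frames)
print(len(outs), outs[0].shape)  # 5 torch.Size([1, 16, 32, 32])

A backward pass over the reversed frame list would make the propagation bidirectional.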
{"title":"Deblurring Videos Using Spatial-Temporal Contextual Transformer With Feature Propagation","authors":"Liyan Zhang;Boming Xu;Zhongbao Yang;Jinshan Pan","doi":"10.1109/TIP.2024.3482176","DOIUrl":"10.1109/TIP.2024.3482176","url":null,"abstract":"We present a simple and effective approach to explore both local spatial-temporal contexts and non-local temporal information for video deblurring. First, we develop an effective spatial-temporal contextual transformer to explore local spatial-temporal contexts from videos. As the features extracted by the spatial-temporal contextual transformer does not model the non-local temporal information of video well, we then develop a feature propagation method to aggregate useful features from the long-range frames so that both local spatial-temporal contexts and non-local temporal information can be better utilized for video deblurring. Finally, we formulate the spatial-temporal contextual transformer with the feature propagation into a unified deep convolutional neural network (CNN) and train it in an end-to-end manner. We show that using the spatial-temporal contextual transformer with the feature propagation is able to generate useful features and makes the deep CNN model more compact and effective for video deblurring. Extensive experimental results show that the proposed method performs favorably against state-of-the-art ones on the benchmark datasets in terms of accuracy and model parameters.","PeriodicalId":94032,"journal":{"name":"IEEE transactions on image processing : a publication of the IEEE Signal Processing Society","volume":"33 ","pages":"6354-6366"},"PeriodicalIF":0.0,"publicationDate":"2024-10-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142489748","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
Enhancing Few-Shot Out-of-Distribution Detection With Pre-Trained Model Features
Jiuqing Dong;Yifan Yao;Wei Jin;Heng Zhou;Yongbin Gao;Zhijun Fang
Ensuring the reliability of open-world intelligent systems heavily relies on effective out-of-distribution (OOD) detection. Despite notable successes in existing OOD detection methods, their performance in scenarios with limited training samples is still suboptimal. Therefore, we first construct a comprehensive few-shot OOD detection benchmark in this paper. Remarkably, our investigation reveals that Parameter-Efficient Fine-Tuning (PEFT) techniques, such as visual prompt tuning and visual adapter tuning, outperform traditional methods like fully fine-tuning and linear probing tuning in few-shot OOD detection. Considering that some valuable information from the pre-trained model, which is conducive to OOD detection, may be lost during the fine-tuning process, we reutilize features from the pre-trained models to mitigate this issue. Specifically, we first propose a training-free approach, termed uncertainty score ensemble (USE). This method integrates feature-matching scores to enhance existing OOD detection methods, significantly narrowing the gap between traditional fine-tuning and PEFT techniques. However, due to its training-free property, this method is unable to improve in-distribution accuracy. To this end, we further propose a method called Domain-Specific and General Knowledge Fusion (DSGF) to improve few-shot OOD detection performance and ID accuracy under different fine-tuning paradigms. Experiment results demonstrate that DSGF enhances few-shot OOD detection across different fine-tuning strategies, shot settings, and OOD detection methods. We believe our work can provide the research community with a novel path to leveraging large-scale visual pre-trained models for addressing FS-OOD detection. The code will be released.
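A rough sketch of ensembling a logit-based score with a feature-matching score computed from frozen pre-trained features (the cosine prototype matching and the weight alpha are assumptions for illustration, not necessarily the paper's USE formulation):

import torch
import torch.nn.functional as F

def ood_score(logits, features, class_prototypes, alpha=0.5):
    """Higher score = more likely in-distribution.
    logits: (B, C) from the fine-tuned head; features: (B, D) from the frozen
    pre-trained backbone; class_prototypes: (C, D) per-class feature means."""
    energy = torch.logsumexp(logits, dim=1)                                  # logit-based score
    sims = F.normalize(features, dim=1) @ F.normalize(class_prototypes, dim=1).T
    feat_match = sims.max(dim=1).values                                      # feature-matching score
    return alpha * energy + (1 - alpha) * feat_match

logits = torch.randn(8, 100)
feats = torch.randn(8, 512)
protos = torch.randn(100, 512)
print(ood_score(logits, feats, protos).shape)  # torch.Size([8])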
{"title":"Enhancing Few-Shot Out-of-Distribution Detection With Pre-Trained Model Features","authors":"Jiuqing Dong;Yifan Yao;Wei Jin;Heng Zhou;Yongbin Gao;Zhijun Fang","doi":"10.1109/TIP.2024.3468874","DOIUrl":"10.1109/TIP.2024.3468874","url":null,"abstract":"Ensuring the reliability of open-world intelligent systems heavily relies on effective out-of-distribution (OOD) detection. Despite notable successes in existing OOD detection methods, their performance in scenarios with limited training samples is still suboptimal. Therefore, we first construct a comprehensive few-shot OOD detection benchmark in this paper. Remarkably, our investigation reveals that Parameter-Efficient Fine-Tuning (PEFT) techniques, such as visual prompt tuning and visual adapter tuning, outperform traditional methods like fully fine-tuning and linear probing tuning in few-shot OOD detection. Considering that some valuable information from the pre-trained model, which is conducive to OOD detection, may be lost during the fine-tuning process, we reutilize features from the pre-trained models to mitigate this issue. Specifically, we first propose a training-free approach, termed uncertainty score ensemble (USE). This method integrates feature-matching scores to enhance existing OOD detection methods, significantly narrowing the gap between traditional fine-tuning and PEFT techniques. However, due to its training-free property, this method is unable to improve in-distribution accuracy. To this end, we further propose a method called Domain-Specific and General Knowledge Fusion (DSGF) to improve few-shot OOD detection performance and ID accuracy under different fine-tuning paradigms. Experiment results demonstrate that DSGF enhances few-shot OOD detection across different fine-tuning strategies, shot settings, and OOD detection methods. We believe our work can provide the research community with a novel path to leveraging large-scale visual pre-trained models for addressing FS-OOD detection. The code will be released.","PeriodicalId":94032,"journal":{"name":"IEEE transactions on image processing : a publication of the IEEE Signal Processing Society","volume":"33 ","pages":"6309-6323"},"PeriodicalIF":0.0,"publicationDate":"2024-10-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142489747","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
Exploration of Learned Lifting-Based Transform Structures for Fully Scalable and Accessible Wavelet-Like Image Compression
Xinyue Li;Aous Naman;David Taubman
This paper provides a comprehensive study on features and performance of different ways to incorporate neural networks into lifting-based wavelet-like transforms, within the context of fully scalable and accessible image compression. Specifically, we explore different arrangements of lifting steps, as well as various network architectures for learned lifting operators. Moreover, we examine the impact of the number of learned lifting steps, the number of channels, the number of layers and the support of kernels in each learned lifting operator. To facilitate the study, we investigate two generic training methodologies that are simultaneously appropriate to a wide variety of lifting structures considered. Experimental results ultimately suggest that retaining fixed lifting steps from the base wavelet transform is highly beneficial. Moreover, we demonstrate that employing more learned lifting steps and more layers in each learned lifting operator do not contribute strongly to the compression performance. However, benefits can be obtained by utilizing more channels in each learned lifting operator. Ultimately, the learned wavelet-like transform proposed in this paper achieves over 25% bit-rate savings compared to JPEG 2000 with compact spatial support.
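For background, one level of the classic LeGall 5/3 lifting transform on a 1-D signal illustrates the predict/update structure that learned lifting operators generalize (this is the fixed textbook transform with periodic boundary handling, not the learned variant studied in the paper):

import numpy as np

def lifting_53_forward(x):
    """One level of the LeGall 5/3 lifting transform (length of x must be even)."""
    even, odd = x[0::2].astype(float), x[1::2].astype(float)
    # Predict: estimate each odd sample from its two even neighbors.
    detail = odd - 0.5 * (even + np.roll(even, -1))
    # Update: adjust even samples with the detail signal to preserve the running average.
    approx = even + 0.25 * (detail + np.roll(detail, 1))
    return approx, detail

x = np.arange(16)
approx, detail = lifting_53_forward(x)
print(approx, detail)

Learned variants replace the fixed predict and update filters with small neural networks, which is exactly the design space the paper explores.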
{"title":"Exploration of Learned Lifting-Based Transform Structures for Fully Scalable and Accessible Wavelet-Like Image Compression","authors":"Xinyue Li;Aous Naman;David Taubman","doi":"10.1109/TIP.2024.3482877","DOIUrl":"10.1109/TIP.2024.3482877","url":null,"abstract":"This paper provides a comprehensive study on features and performance of different ways to incorporate neural networks into lifting-based wavelet-like transforms, within the context of fully scalable and accessible image compression. Specifically, we explore different arrangements of lifting steps, as well as various network architectures for learned lifting operators. Moreover, we examine the impact of the number of learned lifting steps, the number of channels, the number of layers and the support of kernels in each learned lifting operator. To facilitate the study, we investigate two generic training methodologies that are simultaneously appropriate to a wide variety of lifting structures considered. Experimental results ultimately suggest that retaining fixed lifting steps from the base wavelet transform is highly beneficial. Moreover, we demonstrate that employing more learned lifting steps and more layers in each learned lifting operator do not contribute strongly to the compression performance. However, benefits can be obtained by utilizing more channels in each learned lifting operator. Ultimately, the learned wavelet-like transform proposed in this paper achieves over 25% bit-rate savings compared to JPEG 2000 with compact spatial support.","PeriodicalId":94032,"journal":{"name":"IEEE transactions on image processing : a publication of the IEEE Signal Processing Society","volume":"33 ","pages":"6173-6188"},"PeriodicalIF":0.0,"publicationDate":"2024-10-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142488358","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
A Bi-Directionally Fused Boundary Aware Network for Skin Lesion Segmentation
Feiniu Yuan;Yuhuan Peng;Qinghua Huang;Xuelong Li
It is quite challenging to visually identify skin lesions with irregular shapes, blurred boundaries and large scale variance. A Convolutional Neural Network (CNN) extracts more local features with abundant spatial information, while a Transformer has the powerful ability to capture more global information but with insufficient spatial details. To overcome the difficulties in discriminating small or blurred skin lesions, we propose a Bi-directionally Fused Boundary Aware Network (BiFBA-Net). To utilize complementary features produced by CNNs and Transformers, we design a dual-encoding structure. Different from existing dual-encoders, our method designs a Bi-directional Attention Gate (Bi-AG) with two inputs and two outputs for crosswise feature fusion. Our Bi-AG accepts two kinds of features from the CNN and Transformer encoders, and two attention gates are designed to generate two attention outputs that are sent back to the two encoders. Thus, we implement adequate exchange of multi-scale information between the CNN and Transformer encoders in a bi-directional, attention-driven way. To restore feature maps faithfully, we propose a progressive boundary-aware decoding structure containing three decoders with six supervised losses. The first decoder is a CNN for producing more spatial details. The second one is a Partial Decoder (PD) for aggregating high-level features with more semantics. The last one is a Boundary Aware Decoder (BAD) proposed to progressively improve boundary accuracy. Our BAD uses residual structures and Reverse Attention (RA) at different scales to deeply mine structural and spatial details for refining lesion boundaries. Extensive experiments on public datasets show that our BiFBA-Net achieves higher segmentation accuracy and has a much better ability to perceive boundaries than the compared methods. It also alleviates both over-segmentation of small lesions and under-segmentation of large ones.
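A minimal sketch of a cross-gating attention module between a CNN feature map and a Transformer feature map, with one gated output returned toward each branch; this loosely mirrors the described two-input/two-output design but is not the actual Bi-AG architecture:

import torch
import torch.nn as nn

class BiAttentionGate(nn.Module):
    """Cross-gate two feature maps of the same shape: each branch is re-weighted
    by an attention map computed from the other branch."""
    def __init__(self, channels):
        super().__init__()
        self.gate_cnn = nn.Sequential(nn.Conv2d(channels, channels, 1), nn.Sigmoid())
        self.gate_trans = nn.Sequential(nn.Conv2d(channels, channels, 1), nn.Sigmoid())

    def forward(self, f_cnn, f_trans):
        # f_cnn, f_trans: (B, C, H, W)
        out_to_cnn = f_cnn * self.gate_trans(f_trans)    # attention derived from the Transformer branch
        out_to_trans = f_trans * self.gate_cnn(f_cnn)    # attention derived from the CNN branch
        return out_to_cnn, out_to_trans

gate = BiAttentionGate(channels=32)
a, b = gate(torch.randn(1, 32, 56, 56), torch.randn(1, 32, 56, 56))
print(a.shape, b.shape)  # both torch.Size([1, 32, 56, 56])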
{"title":"A Bi-Directionally Fused Boundary Aware Network for Skin Lesion Segmentation","authors":"Feiniu Yuan;Yuhuan Peng;Qinghua Huang;Xuelong Li","doi":"10.1109/TIP.2024.3482864","DOIUrl":"10.1109/TIP.2024.3482864","url":null,"abstract":"It is quite challenging to visually identify skin lesions with irregular shapes, blurred boundaries and large scale variances. Convolutional Neural Network (CNN) extracts more local features with abundant spatial information, while Transformer has the powerful ability to capture more global information but with insufficient spatial details. To overcome the difficulties in discriminating small or blurred skin lesions, we propose a Bi-directionally Fused Boundary Aware Network (BiFBA-Net). To utilize complementary features produced by CNNs and Transformers, we design a dual-encoding structure. Different from existing dual-encoders, our method designs a Bi-directional Attention Gate (Bi-AG) with two inputs and two outputs for crosswise feature fusion. Our Bi-AG accepts two kinds of features from CNN and Transformer encoders, and two attention gates are designed to generate two attention outputs that are sent back to the two encoders. Thus, we implement adequate exchanging of multi-scale information between CNN and Transformer encoders in a bi-directional and attention way. To perfectly restore feature maps, we propose a progressive decoding structure with boundary aware, containing three decoders with six supervised losses. The first decoder is a CNN network for producing more spatial details. The second one is a Partial Decoder (PD) for aggregating high-level features with more semantics. The last one is a Boundary Aware Decoder (BAD) proposed to progressively improve boundary accuracy. Our BAD uses residual structure and Reverse Attention (RA) at different scales to deeply mine structural and spatial details for refining lesion boundaries. Extensive experiments on public datasets show that our BiFBA-Net achieves higher segmentation accuracy, and has much better ability of boundary perceptions than compared methods. It also alleviates both over-segmentation of small lesions and under-segmentation of large ones.","PeriodicalId":94032,"journal":{"name":"IEEE transactions on image processing : a publication of the IEEE Signal Processing Society","volume":"33 ","pages":"6340-6353"},"PeriodicalIF":0.0,"publicationDate":"2024-10-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142488442","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0