
Latest articles: IEEE Transactions on Image Processing (a publication of the IEEE Signal Processing Society)

Feature Mixture on Pre-Trained Model for Few-Shot Learning
Shuo Wang;Jinda Lu;Haiyang Xu;Yanbin Hao;Xiangnan He
Few-shot learning (FSL) aims to recognize novel objects from limited training samples. A robust feature extractor (backbone) can significantly improve the recognition performance of an FSL model. However, training an effective backbone is challenging because 1) designing and validating backbone structures is a time-consuming and expensive process, and 2) a backbone trained on the known (base) categories tends to focus on the textures of the objects it has learned, which makes it hard to describe novel samples. To solve these problems, we propose a feature mixture operation on the pre-trained (fixed) features: 1) We replace part of the values of a feature map from a novel category with content from other feature maps to increase the generalizability and diversity of training samples, which avoids retraining a complex backbone at high computational cost. 2) We use the similarities between features to constrain the mixture operation, which helps the classifier focus on representations of the novel object that are hidden in the features of the pre-trained backbone due to its biased training. Experimental studies on five benchmark datasets in both inductive and transductive settings demonstrate the effectiveness of our feature mixture (FM). Specifically, compared with the baseline on the Mini-ImageNet dataset, it achieves 3.8% and 4.2% accuracy improvements for 1 and 5 training samples, respectively. Additionally, the proposed mixture operation can be used to improve other existing FSL methods based on backbone training.
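To make the mixture operation concrete, here is a minimal NumPy sketch of a similarity-constrained feature mixture on frozen backbone features: a fraction of a novel-class feature map is overwritten with content from a donor map chosen in proportion to feature similarity. The function names, the softmax donor weighting, and the mixing ratio are illustrative assumptions rather than the authors' implementation.

```python
import numpy as np

def feature_mixture(novel_feat, donor_feats, mix_ratio=0.3, rng=None):
    """Augment a frozen feature map of a novel class by overwriting a random
    subset of its positions with values from a donor feature map, where the
    donor is sampled in proportion to its similarity to the novel map."""
    rng = np.random.default_rng() if rng is None else rng
    flat = novel_feat.reshape(-1)
    donors = donor_feats.reshape(donor_feats.shape[0], -1)

    # Similarity constraint: more similar donors are more likely to be mixed in.
    sims = donors @ flat / (
        np.linalg.norm(donors, axis=1) * np.linalg.norm(flat) + 1e-8
    )
    probs = np.exp(sims - sims.max())
    probs /= probs.sum()
    donor = donors[rng.choice(len(donors), p=probs)]

    # Replace a fraction of the feature values; the backbone is never retrained.
    idx = rng.choice(flat.size, size=int(mix_ratio * flat.size), replace=False)
    mixed = flat.copy()
    mixed[idx] = donor[idx]
    return mixed.reshape(novel_feat.shape)

# Toy usage with random stand-ins for pre-extracted backbone features.
novel = np.random.rand(64, 5, 5)
base = np.random.rand(10, 64, 5, 5)
print(feature_mixture(novel, base).shape)  # (64, 5, 5)
```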
{"title":"Feature Mixture on Pre-Trained Model for Few-Shot Learning","authors":"Shuo Wang;Jinda Lu;Haiyang Xu;Yanbin Hao;Xiangnan He","doi":"10.1109/TIP.2024.3411452","DOIUrl":"10.1109/TIP.2024.3411452","url":null,"abstract":"Few-shot learning (FSL) aims at recognizing a novel object under limited training samples. A robust feature extractor (backbone) can significantly improve the recognition performance of the FSL model. However, training an effective backbone is a challenging issue since 1) designing and validating structures of backbones are time-consuming and expensive processes, and 2) a backbone trained on the known (base) categories is more inclined to focus on the textures of the objects it learns, which is hard to describe the novel samples. To solve these problems, we propose a feature mixture operation on the pre-trained (fixed) features: 1) We replace a part of the values of the feature map from a novel category with the content of other feature maps to increase the generalizability and diversity of training samples, which avoids retraining a complex backbone with high computational costs. 2) We use the similarities between the features to constrain the mixture operation, which helps the classifier focus on the representations of the novel object where these representations are hidden in the features from the pre-trained backbone with biased training. Experimental studies on five benchmark datasets in both inductive and transductive settings demonstrate the effectiveness of our feature mixture (FM). Specifically, compared with the baseline on the Mini-ImageNet dataset, it achieves 3.8% and 4.2% accuracy improvements for 1 and 5 training samples, respectively. Additionally, the proposed mixture operation can be used to improve other existing FSL methods based on backbone training.","PeriodicalId":94032,"journal":{"name":"IEEE transactions on image processing : a publication of the IEEE Signal Processing Society","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2024-07-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141494632","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
Dynamic Spatio-Temporal Graph Reasoning for VideoQA With Self-Supervised Event Recognition
Jie Nie;Xin Wang;Runze Hou;Guohao Li;Hong Chen;Wenwu Zhu
Video question answering (VideoQA) requires the ability to comprehensively understand the visual content of videos. Existing VideoQA models mainly focus on scenarios involving a single event with simple object interactions, and leave event-centric scenarios involving multiple events with dynamically complex object interactions largely unexplored. These conventional VideoQA models are usually based on features extracted from global visual signals, making it difficult to capture object-level and event-level semantics. Although a recent work utilizes a static spatio-temporal graph to explicitly model object interactions in videos, it ignores the dynamic influence of questions on graph construction and fails to exploit the implicit event-level semantic clues in questions. To overcome these limitations, we propose a Self-supervised Dynamic Graph Reasoning (SDGraphR) model for video question answering (VideoQA). Our SDGraphR model learns a question-guided spatio-temporal graph that dynamically encodes intra-frame spatial correlations and inter-frame correspondences between objects in the videos. Furthermore, the proposed SDGraphR model discovers event-level cues from questions to conduct self-supervised learning with an auxiliary event recognition task, which in turn helps to improve its VideoQA performance without using any extra annotations. We carry out extensive experiments to validate the substantial improvements of our proposed SDGraphR model over existing baselines.
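The question-guided graph idea can be illustrated with a toy sketch in which per-frame object affinities are gated by each object's relevance to the question embedding, and adjacent frames are linked by feature correspondence. The gating scheme and tensor shapes are assumptions for illustration, not the SDGraphR architecture.

```python
import numpy as np

def question_guided_graphs(obj_feats, question_vec):
    """Build question-conditioned graphs over detected objects.

    obj_feats    : (T, N, D) object features for T frames, N objects per frame
    question_vec : (D,) question embedding
    Returns intra-frame adjacencies (T, N, N) and inter-frame correspondences (T-1, N, N)."""
    T, N, D = obj_feats.shape
    q = question_vec / (np.linalg.norm(question_vec) + 1e-8)

    intra = np.zeros((T, N, N))
    for t in range(T):
        f = obj_feats[t]
        affinity = f @ f.T                      # intra-frame object-object affinity
        gate = f @ q                            # relevance of each object to the question
        a = affinity * np.outer(gate, gate)     # question-gated edges
        intra[t] = a / (np.abs(a).sum(axis=1, keepdims=True) + 1e-8)

    inter = np.zeros((T - 1, N, N))
    for t in range(T - 1):
        c = obj_feats[t] @ obj_feats[t + 1].T   # temporal correspondence scores
        inter[t] = c / (np.abs(c).sum(axis=1, keepdims=True) + 1e-8)
    return intra, inter

# One step of message passing over the question-guided intra-frame graph.
feats = np.random.rand(8, 5, 32)      # 8 frames, 5 objects, 32-d features
question = np.random.rand(32)
intra, inter = question_guided_graphs(feats, question)
updated = np.einsum('tij,tjd->tid', intra, feats)   # aggregate neighbour features
print(updated.shape)  # (8, 5, 32)
```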
{"title":"Dynamic Spatio-Temporal Graph Reasoning for VideoQA With Self-Supervised Event Recognition","authors":"Jie Nie;Xin Wang;Runze Hou;Guohao Li;Hong Chen;Wenwu Zhu","doi":"10.1109/TIP.2024.3411448","DOIUrl":"10.1109/TIP.2024.3411448","url":null,"abstract":"Video question answering (VideoQA) requires the ability of comprehensively understanding visual contents in videos. Existing VideoQA models mainly focus on scenarios involving a single event with simple object interactions and leave event-centric scenarios involving multiple events with dynamically complex object interactions largely unexplored. These conventional VideoQA models are usually based on features extracted from the global visual signals, making it difficult to capture the object-level and event-level semantics. Although there exists a recent work utilizing a static spatio-temporal graph to explicitly model object interactions in videos, it ignores the dynamic impact of questions for graph construction and fails to exploit the implicit event-level semantic clues in questions. To overcome these limitations, we propose a Self-supervised Dynamic Graph Reasoning (SDGraphR) model for video question answering (VideoQA). Our SDGraphR model learns a question-guided spatio-temporal graph that dynamically encodes intra-frame spatial correlations and inter-frame correspondences between objects in the videos. Furthermore, the proposed SDGraphR model discovers event-level cues from questions to conduct self-supervised learning with an auxiliary event recognition task, which in turn helps to improve its VideoQA performances without using any extra annotations. We carry out extensive experiments to validate the substantial improvements of our proposed SDGraphR model over existing baselines.","PeriodicalId":94032,"journal":{"name":"IEEE transactions on image processing : a publication of the IEEE Signal Processing Society","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2024-07-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141494587","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
Multiple Riemannian Kernel Hashing for Large-Scale Image Set Classification and Retrieval
Xiaobo Shen;Wei Wu;Xiaxin Wang;Yuhui Zheng
Conventional image set methods typically learn from small to medium-sized image set datasets. However, when applied to large-scale image set applications such as classification and retrieval, they face two primary challenges: 1) effectively modeling complex image sets; and 2) efficiently performing tasks. To address these issues, we propose a novel Multiple Riemannian Kernel Hashing (MRKH) method that leverages the power of Riemannian manifolds and hashing for effective and efficient image set representation. MRKH considers multiple heterogeneous Riemannian manifolds to represent each image set. It introduces a multiple kernel learning framework designed to effectively combine statistics from multiple manifolds, and constructs kernels by selecting a small set of anchor points, enabling efficient scalability for large-scale applications. In addition, MRKH further exploits inter- and intra-modal semantic structure to enhance discrimination. Instead of employing continuous features to represent each image set, MRKH learns a hash code for each image set, thereby achieving efficient computation and storage. We present an iterative algorithm with a theoretical convergence guarantee to optimize MRKH, and its computational complexity is linear in the size of the dataset. Extensive experiments on five image set benchmark datasets, including three large-scale ones, demonstrate that the proposed method outperforms state-of-the-art methods in accuracy and efficiency, particularly in large-scale image set classification and retrieval.
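A compact sketch of the ingredients named in the abstract, under simplifying assumptions: each image set is summarized by an SPD covariance descriptor (one Riemannian representation), kernels are computed against a small set of anchor sets, two kernels are combined with weights, and hash bits come from the sign of a projection. The specific statistics, kernel form, and random projection below are illustrative, not the MRKH formulation.

```python
import numpy as np

def spd_log(c):
    """Matrix logarithm of an SPD matrix via its eigendecomposition."""
    w, v = np.linalg.eigh(c)
    return (v * np.log(np.maximum(w, 1e-10))) @ v.T

def set_descriptor(image_set, ridge=1e-3):
    """Represent an image set (n_images, dim) by a regularized covariance (an SPD point)."""
    x = image_set - image_set.mean(axis=0, keepdims=True)
    return x.T @ x / len(image_set) + ridge * np.eye(image_set.shape[1])

def log_euclidean_kernel(c1, c2, gamma=0.1):
    """RBF kernel on the log-Euclidean distance between two SPD descriptors."""
    d = np.linalg.norm(spd_log(c1) - spd_log(c2))
    return np.exp(-gamma * d ** 2)

def multi_kernel_hash(image_set, anchor_sets, kernel_weights, proj):
    """Hash code from a weighted combination of kernels against a few anchor sets."""
    cov = set_descriptor(image_set)
    k_cov = np.array([log_euclidean_kernel(cov, set_descriptor(a)) for a in anchor_sets])
    k_mean = np.array([np.exp(-0.5 * np.linalg.norm(image_set.mean(0) - a.mean(0)) ** 2)
                       for a in anchor_sets])           # a second, mean-based representation
    k = kernel_weights[0] * k_cov + kernel_weights[1] * k_mean   # multiple-kernel combination
    return (proj @ k > 0).astype(np.uint8)              # binarize a projection into hash bits

rng = np.random.default_rng(0)
anchors = [rng.random((20, 16)) for _ in range(8)]      # 8 small anchor image sets
projection = rng.normal(size=(32, 8))                   # 32-bit hash projection
code = multi_kernel_hash(rng.random((30, 16)), anchors, np.array([0.6, 0.4]), projection)
print(code)
```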
{"title":"Multiple Riemannian Kernel Hashing for Large-Scale Image Set Classification and Retrieval","authors":"Xiaobo Shen;Wei Wu;Xiaxin Wang;Yuhui Zheng","doi":"10.1109/TIP.2024.3419414","DOIUrl":"10.1109/TIP.2024.3419414","url":null,"abstract":"Conventional image set methods typically learn from small to medium-sized image set datasets. However, when applied to large-scale image set applications such as classification and retrieval, they face two primary challenges: 1) effectively modeling complex image sets; and 2) efficiently performing tasks. To address the above issues, we propose a novel Multiple Riemannian Kernel Hashing (MRKH) method that leverages the powerful capabilities of Riemannian manifold and Hashing on effective and efficient image set representation. MRKH considers multiple heterogeneous Riemannian manifolds to represent each image set. It introduces a multiple kernel learning framework designed to effectively combine statistics from multiple manifolds, and constructs kernels by selecting a small set of anchor points, enabling efficient scalability for large-scale applications. In addition, MRKH further exploits inter- and intra-modal semantic structure to enhance discrimination. Instead of employing continuous feature to represent each image set, MRKH suggests learning hash code for each image set, thereby achieving efficient computation and storage. We present an iterative algorithm with theoretical convergence guarantee to optimize MRKH, and the computational complexity is linear with the size of dataset. Extensive experiments on five image set benchmark datasets including three large-scale ones demonstrate the proposed method outperforms state-of-the-arts in accuracy and efficiency particularly in large-scale image set classification and retrieval.","PeriodicalId":94032,"journal":{"name":"IEEE transactions on image processing : a publication of the IEEE Signal Processing Society","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2024-07-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141494633","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
Learning Kernel-Modulated Neural Representation for Efficient Light Field Compression
Jinglei Shi;Yihong Xu;Christine Guillemot
Light fields capture 3D scene information by recording light rays emitted from a scene at various orientations. They offer a more immersive perception than classic 2D images, but at the cost of huge data volumes. In this paper, we design a compact neural network representation for the light field compression task. In the same vein as the deep image prior, the neural network takes randomly initialized noise as input and is trained in a supervised manner to best reconstruct the target light field Sub-Aperture Images (SAIs). The network is composed of two types of complementary kernels: descriptive kernels (descriptors) that store scene description information learned during training, and modulatory kernels (modulators) that control the rendering of different SAIs from the queried perspectives. To further enhance the compactness of the network while retaining high quality in the decoded light field, we propose modulator allocation and apply kernel tensor decomposition, followed by non-uniform quantization and lossless entropy coding. Extensive experiments demonstrate that our method outperforms other state-of-the-art (SOTA) methods by a significant margin in the light field compression task. Moreover, after adapting the descriptors, the modulators learned from one light field can be transferred to new light fields for rendering dense views, showing the potential of the solution for view synthesis.
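A toy linear analogue of the descriptor/modulator split: a shared random input, a descriptive weight matrix that stores scene content, and one small modulatory vector per sub-aperture view that rescales the shared weights to render that view. In the paper these are convolutional kernels trained to reconstruct the SAIs; the shapes and the elementwise modulation here are simplifying assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
H, W, Z = 8, 8, 16
noise = rng.normal(size=Z)                      # shared, randomly initialized input (fixed)
descriptor = rng.normal(size=(H * W, Z))        # descriptive kernel: scene content (learned in practice)
modulators = rng.normal(size=(4, H * W, 1))     # one modulatory kernel per sub-aperture view

def render_view(v):
    """Render view v by modulating the shared descriptive kernel with its modulator."""
    weights = descriptor * modulators[v]        # kernel modulation
    return (weights @ noise).reshape(H, W)

views = np.stack([render_view(v) for v in range(4)])
print(views.shape)  # (4, 8, 8)

# Compression view of the design: only the descriptor, the per-view modulators and the
# noise seed need to be stored (then quantized and entropy coded); new modulators can
# later be fitted to transfer the descriptor toward another light field.
```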
{"title":"Learning Kernel-Modulated Neural Representation for Efficient Light Field Compression","authors":"Jinglei Shi;Yihong Xu;Christine Guillemot","doi":"10.1109/TIP.2024.3418670","DOIUrl":"10.1109/TIP.2024.3418670","url":null,"abstract":"Light fields capture 3D scene information by recording light rays emitted from a scene at various orientations. They offer a more immersive perception, compared with classic 2D images, but at the cost of huge data volumes. In this paper, we design a compact neural network representation for the light field compression task. In the same vein as the deep image prior, the neural network takes randomly initialized noise as input and is trained in a supervised manner in order to best reconstruct the target light field Sub-Aperture Images (SAIs). The network is composed of two types of complementary kernels: descriptive kernels (descriptors) that store scene description information learned during training, and modulatory kernels (modulators) that control the rendering of different SAIs from the queried perspectives. To further enhance compactness of the network meanwhile retain high quality of the decoded light field, we propose modulator allocation and apply kernel tensor decomposition techniques, followed by non-uniform quantization and lossless entropy coding. Extensive experiments demonstrate that our method outperforms other state-of-the-art (SOTA) methods by a significant margin in the light field compression task. Moreover, after adapting descriptors, the modulators learned from one light field can be transferred to new light fields for rendering dense views, showing the potential of the solution for view synthesis.","PeriodicalId":94032,"journal":{"name":"IEEE transactions on image processing : a publication of the IEEE Signal Processing Society","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2024-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141478181","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
Learning to Discover Knowledge: A Weakly-Supervised Partial Domain Adaptation Approach
Mengcheng Lan;Min Meng;Jun Yu;Jigang Wu
Domain adaptation has shown appealing performance by leveraging knowledge from a source domain with rich annotations. However, for a specific target task, it is cumbersome to collect related and high-quality source domains. In real-world scenarios, large-scale datasets corrupted with noisy labels are easy to collect, stimulating a great demand for automatic recognition in a generalized setting, i.e., weakly-supervised partial domain adaptation (WS-PDA), which transfers a classifier from a large source domain with noisy labels to a small unlabeled target domain. As such, the key issues of WS-PDA are: 1) how to sufficiently discover the knowledge from the noisily labeled source domain and the unlabeled target domain, and 2) how to successfully adapt the knowledge across domains. In this paper, we propose a simple yet effective domain adaptation approach, termed self-paced transfer classifier learning (SP-TCL), to address the above issues, which can be regarded as a well-performing baseline for several generalized domain adaptation tasks. The proposed model is built upon the self-paced learning scheme, seeking a preferable classifier for the target domain. Specifically, SP-TCL learns to discover faithful knowledge via a carefully designed prudent loss function and simultaneously adapts the learned knowledge to the target domain by iteratively excluding source examples from training in a self-paced fashion. Extensive evaluations on several benchmark datasets demonstrate that SP-TCL significantly outperforms state-of-the-art approaches on several generalized domain adaptation tasks. Code is available at https://github.com/mc-lan/SP-TCL.
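The self-paced ingredient can be sketched as follows: fit a simple classifier, score every source sample by its loss, keep only samples below a threshold that grows each round, and refit. The ridge least-squares classifier and the threshold schedule below are stand-ins for the paper's prudent loss and target adaptation.

```python
import numpy as np

def self_paced_selection(Xs, ys, rounds=5, lam=0.5, growth=1.4, ridge=1e-2):
    """Iteratively exclude likely-noisy source samples in a self-paced fashion."""
    n_cls = int(ys.max()) + 1
    Y = np.eye(n_cls)[ys]                       # one-hot labels
    keep = np.ones(len(Xs), dtype=bool)
    W = None
    for _ in range(rounds):
        Xk, Yk = Xs[keep], Y[keep]
        # Ridge-regularized least-squares classifier on the currently kept samples.
        W = np.linalg.solve(Xk.T @ Xk + ridge * np.eye(Xs.shape[1]), Xk.T @ Yk)
        loss = ((Xs @ W - Y) ** 2).sum(axis=1)  # per-sample loss over all source data
        keep = loss < lam                       # keep only easy / clean-looking samples
        lam *= growth                           # self-paced schedule gradually admits more
    return W, keep

# Toy source domain with 20% flipped labels.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 10))
y = (X[:, 0] > 0).astype(int)
flip = rng.random(200) < 0.2
y[flip] = 1 - y[flip]
W, kept = self_paced_selection(X, y)
print(kept.mean())   # fraction of source samples retained after self-paced selection
```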
{"title":"Learning to Discover Knowledge: A Weakly-Supervised Partial Domain Adaptation Approach","authors":"Mengcheng Lan;Min Meng;Jun Yu;Jigang Wu","doi":"10.1109/TIP.2024.3418581","DOIUrl":"10.1109/TIP.2024.3418581","url":null,"abstract":"Domain adaptation has shown appealing performance by leveraging knowledge from a source domain with rich annotations. However, for a specific target task, it is cumbersome to collect related and high-quality source domains. In real-world scenarios, large-scale datasets corrupted with noisy labels are easy to collect, stimulating a great demand for automatic recognition in a generalized setting, i.e., weakly-supervised partial domain adaptation (WS-PDA), which transfers a classifier from a large source domain with noises in labels to a small unlabeled target domain. As such, the key issues of WS-PDA are: 1) how to sufficiently discover the knowledge from the noisy labeled source domain and the unlabeled target domain, and 2) how to successfully adapt the knowledge across domains. In this paper, we propose a simple yet effective domain adaptation approach, termed as self-paced transfer classifier learning (SP-TCL), to address the above issues, which could be regarded as a well-performing baseline for several generalized domain adaptation tasks. The proposed model is established upon the self-paced learning scheme, seeking a preferable classifier for the target domain. Specifically, SP-TCL learns to discover faithful knowledge via a carefully designed prudent loss function and simultaneously adapts the learned knowledge to the target domain by iteratively excluding source examples from training under the self-paced fashion. Extensive evaluations on several benchmark datasets demonstrate that SP-TCL significantly outperforms state-of-the-art approaches on several generalized domain adaptation tasks. Code is available at \u0000<uri>https://github.com/mc-lan/SP-TCL</uri>\u0000.","PeriodicalId":94032,"journal":{"name":"IEEE transactions on image processing : a publication of the IEEE Signal Processing Society","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2024-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141478182","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
Image Quality Assessment: Measuring Perceptual Degradation via Distribution Measures in Deep Feature Spaces
Xingran Liao;Xuekai Wei;Mingliang Zhou;Zhengguo Li;Sam Kwong
This study aims to develop advanced, training-free full-reference image quality assessment (FR-IQA) models based on deep neural networks. Specifically, we investigate measures that allow us to perceptually compare deep network features and reveal their underlying factors. We find that distribution measures offer advanced perceptual awareness, and we test the Wasserstein distance (WSD), Jensen-Shannon divergence (JSD), and symmetric Kullback-Leibler divergence (SKLD) when comparing deep features acquired from various pretrained deep networks, including the Visual Geometry Group (VGG) network, SqueezeNet, MobileNet, and EfficientNet. The proposed FR-IQA models exhibit superior alignment with subjective human evaluations across diverse image quality assessment (IQA) datasets without training, demonstrating the advanced perceptual relevance of distribution measures when comparing deep network features. Additionally, we explore the applicability of deep distribution measures in image super-resolution enhancement tasks, highlighting their potential for guiding perceptual enhancements. The code is available at https://github.com/Buka-Xing/Deep-network-based-distribution-measures-for-full-reference-image-quality-assessment.
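The following sketch shows how deep features can be compared with distribution measures rather than pointwise errors, using the empirical 1-D Wasserstein distance and a histogram-based Jensen-Shannon divergence averaged over channels; the binning and channel-wise pooling are assumptions, and the study's exact formulation may differ.

```python
import numpy as np

def wasserstein_1d(a, b):
    """Empirical 1-D Wasserstein-1 distance: mean gap between sorted samples."""
    a, b = np.sort(a.ravel()), np.sort(b.ravel())
    n = min(a.size, b.size)
    return np.abs(a[:n] - b[:n]).mean()

def js_divergence(a, b, bins=64):
    """Jensen-Shannon divergence between histograms of two feature channels."""
    lo = min(a.min(), b.min())
    hi = max(a.max(), b.max()) + 1e-8
    p, _ = np.histogram(a, bins=bins, range=(lo, hi))
    q, _ = np.histogram(b, bins=bins, range=(lo, hi))
    p = p / (p.sum() + 1e-12)
    q = q / (q.sum() + 1e-12)
    m = 0.5 * (p + q)
    kl = lambda x, y: np.sum(np.where(x > 0, x * np.log((x + 1e-12) / (y + 1e-12)), 0.0))
    return 0.5 * kl(p, m) + 0.5 * kl(q, m)

def perceptual_distance(feat_ref, feat_dist, measure=wasserstein_1d):
    """Average a distribution measure over the channels of a deep feature map (C, H, W)."""
    return float(np.mean([measure(fr, fd) for fr, fd in zip(feat_ref, feat_dist)]))

# Toy example with random stand-ins for pretrained-network features.
ref = np.random.rand(32, 14, 14)
dist = ref + 0.1 * np.random.randn(32, 14, 14)
print(perceptual_distance(ref, dist), perceptual_distance(ref, dist, js_divergence))
```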
{"title":"Image Quality Assessment: Measuring Perceptual Degradation via Distribution Measures in Deep Feature Spaces","authors":"Xingran Liao;Xuekai Wei;Mingliang Zhou;Zhengguo Li;Sam Kwong","doi":"10.1109/TIP.2024.3409176","DOIUrl":"10.1109/TIP.2024.3409176","url":null,"abstract":"This study aims to develop advanced and training-free full-reference image quality assessment (FR-IQA) models based on deep neural networks. Specifically, we investigate measures that allow us to perceptually compare deep network features and reveal their underlying factors. We find that distribution measures enjoy advanced perceptual awareness and test the Wasserstein distance (WSD), Jensen-Shannon divergence (JSD), and symmetric Kullback-Leibler divergence (SKLD) measures when comparing deep features acquired from various pretrained deep networks, including the Visual Geometry Group (VGG) network, SqueezeNet, MobileNet, and EfficientNet. The proposed FR-IQA models exhibit superior alignment with subjective human evaluations across diverse image quality assessment (IQA) datasets without training, demonstrating the advanced perceptual relevance of distribution measures when comparing deep network features. Additionally, we explore the applicability of deep distribution measures in image super-resolution enhancement tasks, highlighting their potential for guiding perceptual enhancements. The code is available on website. (\u0000<uri>https://github.com/Buka-Xing/Deep-network-based-distribution-measures-for-full-reference-image-quality-assessment</uri>\u0000).","PeriodicalId":94032,"journal":{"name":"IEEE transactions on image processing : a publication of the IEEE Signal Processing Society","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2024-06-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141463097","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
Single-Subject Deep-Learning Image Reconstruction With a Neural Optimization Transfer Algorithm for PET-Enabled Dual-Energy CT Imaging
Siqi Li;Yansong Zhu;Benjamin A. Spencer;Guobao Wang
Combining dual-energy computed tomography (DECT) with positron emission tomography (PET) offers many potential clinical applications, but typically requires expensive hardware upgrades or increases radiation doses on PET/CT scanners due to an extra X-ray CT scan. The recent PET-enabled DECT method allows DECT imaging on PET/CT without requiring a second X-ray CT scan. It combines the already existing X-ray CT image with a 511 keV γ-ray CT (gCT) image reconstructed from time-of-flight PET emission data. A kernelized framework has been developed for reconstructing the gCT image, but this method has not fully exploited the potential of prior knowledge. Deep neural networks may bring the power of deep learning to this application. However, common approaches require a large database for training, which is impractical for a new imaging method like PET-enabled DECT. Here, we propose a single-subject method that uses a neural-network representation as a deep coefficient prior to improve gCT image reconstruction without population-based pre-training. The resulting optimization problem becomes the tomographic estimation of nonlinear neural-network parameters from gCT projection data. This complicated problem can be solved efficiently by utilizing the optimization transfer strategy with quadratic surrogates. Each iteration of the proposed neural optimization transfer algorithm includes a PET activity image update, a gCT image update, and least-squares neural-network learning in the gCT image domain. The algorithm is guaranteed to monotonically increase the data likelihood. Results from computer simulations, real phantom data, and real patient data demonstrate that the proposed method can significantly improve gCT image quality and the consequent multi-material decomposition compared with other methods.
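A toy alternation that mirrors the structure of the algorithm: a data-consistency update of the image from noisy projections, followed by a least-squares fit of prior coefficients in the image domain. The EM-style multiplicative update and the fixed random basis below stand in for the paper's quadratic-surrogate update and neural-network representation; none of this is the authors' implementation.

```python
import numpy as np

def em_update(x, A, y):
    """EM-style multiplicative update pulling the image toward the projection data."""
    ax = A @ x + 1e-8
    return x * (A.T @ (y / ax)) / (A.T @ np.ones_like(y) + 1e-8)

def fit_image_prior(x, B, ridge=1e-3):
    """Least-squares fit of coefficients c so that B @ c approximates the current image
    (a stand-in for the image-domain network-learning step)."""
    c = np.linalg.solve(B.T @ B + ridge * np.eye(B.shape[1]), B.T @ x)
    return np.maximum(B @ c, 0.0)

rng = np.random.default_rng(0)
n_pix, n_proj, n_basis = 64, 96, 8
A = rng.random((n_proj, n_pix))                 # toy projection operator
x_true = np.abs(rng.normal(size=n_pix)) + 0.1
y = rng.poisson(A @ x_true).astype(float)       # noisy "gCT" projection data
B = rng.random((n_pix, n_basis))                # toy basis playing the role of the network

x = np.ones(n_pix)
for _ in range(30):
    x = em_update(x, A, y)                      # image update (data term)
    x = 0.5 * x + 0.5 * fit_image_prior(x, B)   # blend with the prior-consistent image

print(np.linalg.norm(x - x_true) / np.linalg.norm(x_true))
```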
{"title":"Single-Subject Deep-Learning Image Reconstruction With a Neural Optimization Transfer Algorithm for PET-Enabled Dual-Energy CT Imaging","authors":"Siqi Li;Yansong Zhu;Benjamin A. Spencer;Guobao Wang","doi":"10.1109/TIP.2024.3418347","DOIUrl":"10.1109/TIP.2024.3418347","url":null,"abstract":"Combining dual-energy computed tomography (DECT) with positron emission tomography (PET) offers many potential clinical applications but typically requires expensive hardware upgrades or increases radiation doses on PET/CT scanners due to an extra X-ray CT scan. The recent PET-enabled DECT method allows DECT imaging on PET/CT without requiring a second X-ray CT scan. It combines the already existing X-ray CT image with a 511 keV \u0000<inline-formula> <tex-math>$gamma $ </tex-math></inline-formula>\u0000-ray CT (gCT) image reconstructed from time-of-flight PET emission data. A kernelized framework has been developed for reconstructing gCT image but this method has not fully exploited the potential of prior knowledge. Use of deep neural networks may explore the power of deep learning in this application. However, common approaches require a large database for training, which is impractical for a new imaging method like PET-enabled DECT. Here, we propose a single-subject method by using neural-network representation as a deep coefficient prior to improving gCT image reconstruction without population-based pre-training. The resulting optimization problem becomes the tomographic estimation of nonlinear neural-network parameters from gCT projection data. This complicated problem can be efficiently solved by utilizing the optimization transfer strategy with quadratic surrogates. Each iteration of the proposed neural optimization transfer algorithm includes: PET activity image update; gCT image update; and least-square neural-network learning in the gCT image domain. This algorithm is guaranteed to monotonically increase the data likelihood. Results from computer simulation, real phantom data and real patient data have demonstrated that the proposed method can significantly improve gCT image quality and consequent multi-material decomposition as compared to other methods.","PeriodicalId":94032,"journal":{"name":"IEEE transactions on image processing : a publication of the IEEE Signal Processing Society","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2024-06-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141462952","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
5-D Epanechnikov Mixture-of-Experts in Light Field Image Compression
Boning Liu;Yan Zhao;Xiaomeng Jiang;Xingguang Ji;Shigang Wang;Yebin Liu;Jian Wei
In this study, we propose a modeling-based compression approach for dense/lenslet light field images captured by Plenoptic 2.0 cameras with square microlenses. This method employs the 5-D Epanechnikov Kernel (5-D EK) and its associated theories. Owing to the limitations of modeling larger image blocks with Epanechnikov Mixture Regression (EMR), a 5-D Epanechnikov Mixture-of-Experts using Gaussian Initialization (5-D EMoE-GI) is proposed. This approach outperforms 5-D Gaussian Mixture Regression (5-D GMR). The modeling aspect of our coding framework utilizes the entire EI and the 5-D Adaptive Model Selection (5-D AMLS) algorithm. The experimental results demonstrate that the decoded rendered images produced by our method are perceptually superior, outperforming High Efficiency Video Coding (HEVC) and JPEG 2000 at bitrates below 0.06 bpp.
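To make the kernel concrete, here is the multivariate Epanechnikov profile together with a kernel-regression-style prediction at a 5-D light-field coordinate (two angular, two spatial, one colour dimension). The component parameters and the prediction rule are illustrative; the paper's 5-D EMoE-GI fits experts with Gaussian initialization rather than using fixed components.

```python
import numpy as np

def epanechnikov(u):
    """Multivariate Epanechnikov profile: proportional to 1 - ||u||^2 inside the unit ball."""
    return np.maximum(1.0 - np.sum(u ** 2, axis=-1), 0.0)

def mixture_predict(coord, centers, inv_bw, weights, values):
    """Predict the value at a 5-D coordinate from K Epanechnikov components.

    coord   : (5,) query point (s, t, x, y, colour-channel index)
    centers : (K, 5) component centres
    inv_bw  : (K, 5, 5) inverse bandwidth matrices
    weights : (K,) mixture weights
    values  : (K,) value carried by each component"""
    u = np.einsum('kij,kj->ki', inv_bw, coord - centers)    # whitened offsets
    k = weights * epanechnikov(u)
    return np.sum(k * values) / (np.sum(k) + 1e-12)

rng = np.random.default_rng(0)
K = 32
centers = rng.random((K, 5))
inv_bw = np.stack([np.eye(5) / 1.5 for _ in range(K)])      # broad isotropic bandwidth 1.5
weights = rng.random(K)
values = rng.random(K)
print(mixture_predict(np.array([0.5, 0.5, 0.5, 0.5, 0.2]), centers, inv_bw, weights, values))
```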
{"title":"5-D Epanechnikov Mixture-of-Experts in Light Field Image Compression","authors":"Boning Liu;Yan Zhao;Xiaomeng Jiang;Xingguang Ji;Shigang Wang;Yebin Liu;Jian Wei","doi":"10.1109/TIP.2024.3418350","DOIUrl":"10.1109/TIP.2024.3418350","url":null,"abstract":"In this study, we propose a modeling-based compression approach for dense/lenslet light field images captured by Plenoptic 2.0 with square microlenses. This method employs the 5-D Epanechnikov Kernel (5-D EK) and its associated theories. Owing to the limitations of modeling larger image block using the Epanechnikov Mixture Regression (EMR), a 5-D Epanechnikov Mixture-of-Experts using Gaussian Initialization (5-D EMoE-GI) is proposed. This approach outperforms 5-D Gaussian Mixture Regression (5-D GMR). The modeling aspect of our coding framework utilizes the entire EI and the 5D Adaptive Model Selection (5-D AMLS) algorithm. The experimental results demonstrate that the decoded rendered images produced by our method are perceptually superior, outperforming High Efficiency Video Coding (HEVC) and JPEG 2000 at a bit depth below 0.06bpp.","PeriodicalId":94032,"journal":{"name":"IEEE transactions on image processing : a publication of the IEEE Signal Processing Society","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2024-06-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141462743","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
Siamese-DETR for Generic Multi-Object Tracking
Qiankun Liu;Yichen Li;Yuqi Jiang;Ying Fu
The ability to detect and track dynamic objects in different scenes is fundamental to real-world applications such as autonomous driving and robot navigation. However, traditional Multi-Object Tracking (MOT) is limited to tracking objects belonging to pre-defined closed-set categories. Recently, Generic MOT (GMOT) has been proposed to track objects of interest beyond pre-defined categories; it can be divided into Open-Vocabulary MOT (OVMOT) and Template-Image-based MOT (TIMOT). Considering that expensive, well pre-trained (vision-)language models and fine-grained category annotations are required to train OVMOT models, in this paper we focus on TIMOT and propose a simple but effective method, Siamese-DETR. Only the commonly used detection datasets (e.g., COCO) are required for training. Unlike existing TIMOT methods, which train a Single Object Tracking (SOT) based detector to detect objects of interest and then apply a data-association-based MOT tracker to obtain the trajectories, we leverage the inherent object queries in DETR variants. Specifically: 1) Multi-scale object queries are designed based on the given template image, which are effective for detecting objects of different scales that share the template image's category; 2) A dynamic matching training strategy is introduced to train Siamese-DETR on commonly used detection datasets, taking full advantage of the provided annotations; 3) The online tracking pipeline is simplified into a tracking-by-query manner by incorporating the boxes tracked in the previous frame as additional query boxes. The complex data association is replaced with the much simpler Non-Maximum Suppression (NMS). Extensive experimental results show that Siamese-DETR surpasses existing MOT methods on the GMOT-40 dataset by a large margin.
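A sketch of the tracking-by-query bookkeeping: detections from the template-conditioned detector are concatenated with the previous frame's track boxes (acting as extra queries), and plain NMS over the union replaces explicit data association. The IoU/NMS helpers are standard; treating the surviving boxes directly as the new tracks is a simplifying assumption.

```python
import numpy as np

def iou(box, boxes):
    """IoU between one (x1, y1, x2, y2) box and an array of boxes."""
    x1 = np.maximum(box[0], boxes[:, 0]); y1 = np.maximum(box[1], boxes[:, 1])
    x2 = np.minimum(box[2], boxes[:, 2]); y2 = np.minimum(box[3], boxes[:, 3])
    inter = np.clip(x2 - x1, 0, None) * np.clip(y2 - y1, 0, None)
    area = lambda b: (b[..., 2] - b[..., 0]) * (b[..., 3] - b[..., 1])
    return inter / (area(box) + area(boxes) - inter + 1e-8)

def nms(boxes, scores, thresh=0.5):
    """Standard non-maximum suppression; returns indices of kept boxes."""
    order = np.argsort(-scores)
    keep = []
    while order.size:
        i = order[0]
        keep.append(i)
        rest = order[1:]
        order = rest[iou(boxes[i], boxes[rest]) < thresh]
    return keep

def track_step(det_boxes, det_scores, prev_track_boxes, prev_track_scores):
    """Tracking-by-query: previous tracks act as extra query boxes for the current frame,
    and NMS over the union replaces explicit data association."""
    boxes = np.concatenate([det_boxes, prev_track_boxes], axis=0)
    scores = np.concatenate([det_scores, prev_track_scores], axis=0)
    kept = nms(boxes, scores)
    return boxes[kept], scores[kept]

dets = np.array([[10, 10, 50, 50], [12, 11, 52, 49], [80, 80, 120, 120]], dtype=float)
det_scores = np.array([0.9, 0.8, 0.7])
prev = np.array([[11, 10, 51, 50]], dtype=float)
prev_scores = np.array([0.95])
print(track_step(dets, det_scores, prev, prev_scores))
```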
{"title":"Siamese-DETR for Generic Multi-Object Tracking","authors":"Qiankun Liu;Yichen Li;Yuqi Jiang;Ying Fu","doi":"10.1109/TIP.2024.3416880","DOIUrl":"10.1109/TIP.2024.3416880","url":null,"abstract":"The ability to detect and track the dynamic objects in different scenes is fundamental to real-world applications, e.g., autonomous driving and robot navigation. However, traditional Multi-Object Tracking (MOT) is limited to track objects belonging to the pre-defined closed-set categories. Recently, Generic MOT (GMOT) is proposed to track interested objects beyond pre-defined categories and it can be divided into Open-Vocabulary MOT (OVMOT) and Template-Image-based MOT (TIMOT). Taking the consideration that the expensive well pre-trained (vision-)language model and fine-grained category annotations are required to train OVMOT models, in this paper, we focus on TIMOT and propose a simple but effective method, Siamese-DETR. Only the commonly used detection datasets (e.g., COCO) are required for training. Different from existing TIMOT methods, which train a Single Object Tracking (SOT) based detector to detect interested objects and then apply a data association based MOT tracker to get the trajectories, we leverage the inherent object queries in DETR variants. Specifically: 1) The multi-scale object queries are designed based on the given template image, which are effective for detecting different scales of objects with the same category as the template image; 2) A dynamic matching training strategy is introduced to train Siamese-DETR on commonly used detection datasets, which takes full advantage of provided annotations; 3) The online tracking pipeline is simplified through a tracking-by-query manner by incorporating the tracked boxes in the previous frame as additional query boxes. The complex data association is replaced with the much simpler Non-Maximum Suppression (NMS). Extensive experimental results show that Siamese-DETR surpasses existing MOT methods on GMOT-40 dataset by a large margin.","PeriodicalId":94032,"journal":{"name":"IEEE transactions on image processing : a publication of the IEEE Signal Processing Society","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2024-06-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141452464","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
Self-Supervised Representation Learning With Spatial-Temporal Consistency for Sign Language Recognition
Weichao Zhao;Wengang Zhou;Hezhen Hu;Min Wang;Houqiang Li
Recently, there have been efforts to improve performance in sign language recognition by designing self-supervised learning methods. However, these methods capture limited information from sign pose data in a frame-wise learning manner, leading to sub-optimal solutions. To this end, we propose a simple yet effective self-supervised contrastive learning framework that excavates rich context via spatial-temporal consistency from two distinct perspectives and learns instance-discriminative representations for sign language recognition. On one hand, since the semantics of sign language are expressed by the cooperation of fine-grained hands and the coarse-grained trunk, we utilize information at both granularities and encode it into latent spaces. The consistency between hand and trunk features is constrained to encourage learning consistent representations of instance samples. On the other hand, inspired by the complementary property of the motion and joint modalities, we first introduce first-order motion information into sign language modeling. Additionally, we further bridge the interaction between the embedding spaces of both modalities, facilitating bidirectional knowledge transfer to enhance sign language representation. Our method is evaluated with extensive experiments on four public benchmarks and achieves new state-of-the-art performance by a notable margin. The source code is publicly available at https://github.com/sakura/Code.
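Minimal versions of the pieces named in the abstract: first-order motion from a pose sequence, a hand/trunk consistency term, and an instance-discriminative InfoNCE-style loss. Splitting keypoints into "hand" and "trunk" groups by index and the statistics-based encoder are assumptions for illustration.

```python
import numpy as np

def first_order_motion(joints):
    """First-order motion: frame-to-frame displacement of pose keypoints (T, J, 2) -> (T-1, J, 2)."""
    return joints[1:] - joints[:-1]

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-8))

def encode(part):
    """Stand-in encoder: summary statistics of a keypoint sub-sequence as an embedding."""
    flat = part.reshape(-1, part.shape[-1])
    return np.concatenate([flat.mean(0), flat.std(0), flat.min(0), flat.max(0)])

def consistency_loss(hand_emb, trunk_emb):
    """Constrain fine-grained hand and coarse-grained trunk views of one sample to agree."""
    return 1.0 - cosine(hand_emb, trunk_emb)

def info_nce(anchor, positive, negatives, tau=0.07):
    """Instance-discriminative contrastive loss."""
    logits = np.array([cosine(anchor, positive)] + [cosine(anchor, n) for n in negatives]) / tau
    logits -= logits.max()
    return float(-np.log(np.exp(logits[0]) / np.exp(logits).sum()))

rng = np.random.default_rng(0)
pose = rng.random((32, 27, 2))                  # 32 frames, 27 keypoints (toy skeleton)
motion = first_order_motion(pose)               # joint modality -> motion modality
hand_emb = encode(pose[:, 21:, :])              # assumed hand keypoint indices
trunk_emb = encode(pose[:, :21, :])             # assumed trunk/body keypoint indices
negatives = [rng.random(hand_emb.shape[0]) for _ in range(8)]
print(motion.shape, consistency_loss(hand_emb, trunk_emb), info_nce(hand_emb, trunk_emb, negatives))
```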
{"title":"Self-Supervised Representation Learning With Spatial-Temporal Consistency for Sign Language Recognition","authors":"Weichao Zhao;Wengang Zhou;Hezhen Hu;Min Wang;Houqiang Li","doi":"10.1109/TIP.2024.3416881","DOIUrl":"10.1109/TIP.2024.3416881","url":null,"abstract":"Recently, there have been efforts to improve the performance in sign language recognition by designing self-supervised learning methods. However, these methods capture limited information from sign pose data in a frame-wise learning manner, leading to sub-optimal solutions. To this end, we propose a simple yet effective self-supervised contrastive learning framework to excavate rich context via spatial-temporal consistency from two distinct perspectives and learn instance discriminative representation for sign language recognition. On one hand, since the semantics of sign language are expressed by the cooperation of fine-grained hands and coarse-grained trunks, we utilize both granularity information and encode them into latent spaces. The consistency between hand and trunk features is constrained to encourage learning consistent representation of instance samples. On the other hand, inspired by the complementary property of motion and joint modalities, we first introduce first-order motion information into sign language modeling. Additionally, we further bridge the interaction between the embedding spaces of both modalities, facilitating bidirectional knowledge transfer to enhance sign language representation. Our method is evaluated with extensive experiments on four public benchmarks, and achieves new state-of-the-art performance with a notable margin. The source code is publicly available at \u0000<uri>https://github.com/sakura/Code</uri>\u0000.","PeriodicalId":94032,"journal":{"name":"IEEE transactions on image processing : a publication of the IEEE Signal Processing Society","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2024-06-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141452463","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0