Latest articles from IEEE transactions on image processing (a publication of the IEEE Signal Processing Society)

How is Visual Attention Influenced by Text Guidance? Database and Model
Yinan Sun;Xiongkuo Min;Huiyu Duan;Guangtao Zhai
The analysis and prediction of visual attention have long been crucial tasks in the fields of computer vision and image processing. In practical applications, images are generally accompanied by various text descriptions; however, few studies have explored the influence of text descriptions on visual attention, let alone developed visual saliency prediction models that consider text guidance. In this paper, we conduct a comprehensive study on text-guided image saliency (TIS) from both subjective and objective perspectives. Specifically, we construct a TIS database named SJTU-TIS, which includes 1200 text-image pairs and the corresponding collected eye-tracking data. Based on the established SJTU-TIS database, we analyze the influence of various text descriptions on visual attention. Then, to facilitate the development of saliency prediction models that consider text influence, we construct a benchmark for the established SJTU-TIS database using state-of-the-art saliency models. Finally, since most existing saliency models ignore the effect of text descriptions on visual attention, we further propose a text-guided saliency (TGSal) prediction model, which extracts and integrates both image features and text features to predict image saliency under various text-description conditions. Our proposed model significantly outperforms state-of-the-art saliency models on both the SJTU-TIS database and pure image saliency databases in terms of various evaluation metrics. The SJTU-TIS database and the code of the proposed TGSal model will be released at: https://github.com/IntMeGroup/TGSal.
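To make the fusion idea concrete, here is a minimal PyTorch sketch of predicting a saliency map from an image feature map and a spatially broadcast text embedding. It is an illustrative stand-in, not the TGSal architecture: the encoder, projection sizes, and simple concatenation fusion are all assumptions.

```python
# Illustrative sketch only (assumed layers, not the TGSal architecture):
# image features are fused with a spatially broadcast text embedding and
# decoded into a single-channel saliency map.
import torch
import torch.nn as nn

class ToyTextGuidedSaliency(nn.Module):
    def __init__(self, text_dim=512, feat_dim=64):
        super().__init__()
        self.img_encoder = nn.Sequential(
            nn.Conv2d(3, feat_dim, 3, padding=1), nn.ReLU(),
            nn.Conv2d(feat_dim, feat_dim, 3, padding=1), nn.ReLU(),
        )
        self.text_proj = nn.Linear(text_dim, feat_dim)  # text -> feature space
        self.decoder = nn.Conv2d(2 * feat_dim, 1, 1)    # fused -> saliency

    def forward(self, image, text_emb):
        f_img = self.img_encoder(image)                  # (B, C, H, W)
        f_txt = self.text_proj(text_emb)[:, :, None, None].expand_as(f_img)
        fused = torch.cat([f_img, f_txt], dim=1)         # integrate both cues
        return torch.sigmoid(self.decoder(fused))        # saliency in [0, 1]

sal = ToyTextGuidedSaliency()(torch.randn(2, 3, 64, 64), torch.randn(2, 512))
print(sal.shape)  # torch.Size([2, 1, 64, 64])
```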
{"title":"How is Visual Attention Influenced by Text Guidance? Database and Model","authors":"Yinan Sun;Xiongkuo Min;Huiyu Duan;Guangtao Zhai","doi":"10.1109/TIP.2024.3461956","DOIUrl":"10.1109/TIP.2024.3461956","url":null,"abstract":"The analysis and prediction of visual attention have long been crucial tasks in the fields of computer vision and image processing. In practical applications, images are generally accompanied by various text descriptions, however, few studies have explored the influence of text descriptions on visual attention, let alone developed visual saliency prediction models considering text guidance. In this paper, we conduct a comprehensive study on text-guided image saliency (TIS) from both subjective and objective perspectives. Specifically, we construct a TIS database named SJTU-TIS, which includes 1200 text-image pairs and the corresponding collected eye-tracking data. Based on the established SJTU-TIS database, we analyze the influence of various text descriptions on visual attention. Then, to facilitate the development of saliency prediction models considering text influence, we construct a benchmark for the established SJTU-TIS database using state-of-the-art saliency models. Finally, considering the effect of text descriptions on visual attention, while most existing saliency models ignore this impact, we further propose a text-guided saliency (TGSal) prediction model, which extracts and integrates both image features and text features to predict the image saliency under various text-description conditions. Our proposed model significantly outperforms the state-of-the-art saliency models on both the SJTU-TIS database and the pure image saliency databases in terms of various evaluation metrics. The SJTU-TIS database and the code of the proposed TGSal model will be released at: \u0000<uri>https://github.com/IntMeGroup/TGSal</uri>\u0000.","PeriodicalId":94032,"journal":{"name":"IEEE transactions on image processing : a publication of the IEEE Signal Processing Society","volume":"33 ","pages":"5392-5407"},"PeriodicalIF":0.0,"publicationDate":"2024-09-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142309479","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Optimized Pattern Partitioning for Multi-Pass Printing: PARAOMASKING
Utpal Sarkar;Héctor Gómez;Ján Morovič;Peter Morovič
In halftone-driven imaging pipelines, focus is often placed on halftone pattern design as the main contributor to overall output quality. However, for sequential or cumulative imaging technologies, such as multi-pass printing, an equally important element is pattern partitioning – how the overall halftone pattern is divided among the different partial imaging events, such as printing passes. Partitioning is usually designed independently of the halftone pattern, making it impossible to optimize for the joint effect of halftoning and partitioning. Moreover, even a good halftone pattern coupled with a good partitioning scheme does not guarantee well-partitioned halftones and can impact image quality attributes. In this paper a novel approach called PARAOMASKING is presented that benefits from the pattern-determinism of PARAWACS halftoning and proposes a partitioning scheme for multi-pass printing such that optimality is also obtained for the partitioned halftones. Results – both digital and printed – show how it can lead to significant improvements in partial pattern quality and overall pattern quality. Consequently, output attributes such as grain, coalescence and pattern robustness are improved. The focus here is on blue-noise pattern preservation, but the approach can also be extended to other objectives, e.g., maximizing per-pass clustering.
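As a rough illustration of what pattern partitioning means, the sketch below splits a binary halftone among N printing passes using a fixed per-pixel selection matrix, so each dot is deterministically assigned to exactly one pass. This is a toy scheme under assumed inputs, not the PARAOMASKING algorithm; it only mirrors the kind of pattern-determinism the paper builds on.

```python
# Toy pattern partitioning (not PARAOMASKING): a fixed selection matrix
# deterministically assigns every halftone dot to exactly one of n passes.
import numpy as np

def partition_halftone(halftone, selection, n_passes):
    """halftone: (H, W) bool; selection: (H, W) floats in [0, 1)."""
    return [
        halftone & (selection >= k / n_passes) & (selection < (k + 1) / n_passes)
        for k in range(n_passes)
    ]

rng = np.random.default_rng(0)
halftone = rng.random((64, 64)) < 0.3      # toy halftone at 30% coverage
selection = rng.random((64, 64))           # stand-in for an optimized mask
passes = partition_halftone(halftone, selection, n_passes=4)
# the passes are disjoint and recompose the original pattern exactly
assert np.array_equal(np.logical_or.reduce(passes), halftone)
```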
{"title":"Optimized Pattern Partitioning for Multi-Pass Printing: PARAOMASKING","authors":"Utpal Sarkar;Héctor Gómez;Ján Morovič;Peter Morovič","doi":"10.1109/TIP.2024.3459611","DOIUrl":"10.1109/TIP.2024.3459611","url":null,"abstract":"In halftone-driven imaging pipelines focus is often placed on halftone pattern design as the main contributor to overall output quality. However, for sequential or cumulative imaging technologies, such as multi-pass printing, an important element is also pattern partitioning – how the overall halftone pattern is divided among the different partial imaging events such as printing passes. Partitioning is usually designed agnostically of the halftone pattern, making it impossible to optimize for the joint effect of halftone and partitioning. However, even a good halftone pattern coupled with a good partitioning scheme does not guarantee well partitioned halftones and can impact image quality attributes. In this paper a novel approach called PARAOMASKING is presented that benefits from the pattern-determinism of PARAWACS halftoning and proposes a partitioning scheme for multi-pass printing such that optimality is also obtained for partitioned halftones. Results – both digital and printed – show how it can lead to significant improvements in partial pattern quality and overall pattern quality. Consequently, output attributes such as grain, coalescence and pattern robustness are improved. The focus here is on blue-noise pattern preservation but the approach can also be extended to other objectives, e.g., maximizing per-pass clustering.","PeriodicalId":94032,"journal":{"name":"IEEE transactions on image processing : a publication of the IEEE Signal Processing Society","volume":"33 ","pages":"5382-5391"},"PeriodicalIF":0.0,"publicationDate":"2024-09-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142275161","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
CrossDiff: Exploring Self-Supervised Representation of Pansharpening via Cross-Predictive Diffusion Model
Yinghui Xing;Litao Qu;Shizhou Zhang;Kai Zhang;Yanning Zhang;Lorenzo Bruzzone
Fusion of a panchromatic (PAN) image and the corresponding multispectral (MS) image is also known as pansharpening, which aims to combine the abundant spatial details of PAN images and the spectral information of MS images. Due to the absence of high-resolution MS images, available deep-learning-based methods usually follow the paradigm of training at reduced resolution and testing at both reduced and full resolution. When taking original MS and PAN images as inputs, they tend to obtain sub-optimal results due to the scale variation. In this paper, we propose to explore a self-supervised representation for pansharpening by designing a cross-predictive diffusion model, named CrossDiff, which is trained in two stages. In the first stage, we introduce a cross-predictive pretext task to pre-train the UNet structure based on the conditional Denoising Diffusion Probabilistic Model (DDPM). In the second stage, the encoders of the UNets are frozen to directly extract spatial and spectral features from PAN and MS images, and only the fusion head is trained to adapt to the pansharpening task. Extensive experiments show the effectiveness and superiority of the proposed model compared with state-of-the-art supervised and unsupervised methods. Besides, cross-sensor experiments also verify the generalization ability of the proposed self-supervised representation learners to other satellite datasets. Code is available at https://github.com/codgodtao/CrossDiff.
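The second training stage described above can be pictured with a short sketch: pre-trained encoders are frozen and only a small fusion head receives gradients. The stand-in encoders, layer shapes, and L1 objective below are assumptions for illustration, not the CrossDiff implementation.

```python
# Sketch of the second training stage (stand-in modules and an assumed L1
# objective, not the CrossDiff code): encoders are frozen, only the fusion
# head learns.
import torch
import torch.nn as nn
import torch.nn.functional as F

pan_encoder = nn.Conv2d(1, 32, 3, padding=1)  # stand-in for the PAN UNet encoder
ms_encoder = nn.Conv2d(4, 32, 3, padding=1)   # stand-in for the MS UNet encoder
for enc in (pan_encoder, ms_encoder):
    enc.requires_grad_(False)                 # freeze pre-trained weights

fusion_head = nn.Conv2d(64, 4, 3, padding=1)  # the only trainable part
opt = torch.optim.Adam(fusion_head.parameters(), lr=1e-4)

pan = torch.randn(2, 1, 64, 64)               # panchromatic input
ms_up = torch.randn(2, 4, 64, 64)             # upsampled multispectral input
target = torch.randn(2, 4, 64, 64)            # reduced-resolution reference

feats = torch.cat([pan_encoder(pan), ms_encoder(ms_up)], dim=1)
loss = F.l1_loss(fusion_head(feats), target)
loss.backward()                               # gradients reach only the head
opt.step()
```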
{"title":"CrossDiff: Exploring Self-SupervisedRepresentation of Pansharpening via Cross-Predictive Diffusion Model","authors":"Yinghui Xing;Litao Qu;Shizhou Zhang;Kai Zhang;Yanning Zhang;Lorenzo Bruzzone","doi":"10.1109/TIP.2024.3461476","DOIUrl":"10.1109/TIP.2024.3461476","url":null,"abstract":"Fusion of a panchromatic (PAN) image and corresponding multispectral (MS) image is also known as pansharpening, which aims to combine abundant spatial details of PAN and spectral information of MS images. Due to the absence of high-resolution MS images, available deep-learning-based methods usually follow the paradigm of training at reduced resolution and testing at both reduced and full resolution. When taking original MS and PAN images as inputs, they always obtain sub-optimal results due to the scale variation. In this paper, we propose to explore the self-supervised representation for pansharpening by designing a cross-predictive diffusion model, named CrossDiff. It has two-stage training. In the first stage, we introduce a cross-predictive pretext task to pre-train the UNet structure based on conditional Denoising Diffusion Probabilistic Model (DDPM). While in the second stage, the encoders of the UNets are frozen to directly extract spatial and spectral features from PAN and MS images, and only the fusion head is trained to adapt for pansharpening task. Extensive experiments show the effectiveness and superiority of the proposed model compared with state-of-the-art supervised and unsupervised methods. Besides, the cross-sensor experiments also verify the generalization ability of proposed self-supervised representation learners for other satellite datasets. Code is available at \u0000<uri>https://github.com/codgodtao/CrossDiff</uri>\u0000.","PeriodicalId":94032,"journal":{"name":"IEEE transactions on image processing : a publication of the IEEE Signal Processing Society","volume":"33 ","pages":"5496-5509"},"PeriodicalIF":0.0,"publicationDate":"2024-09-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142275357","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Pseudo-Labeling Based Practical Semi-Supervised Meta-Training for Few-Shot Learning
Xingping Dong;Tianran Ouyang;Shengcai Liao;Bo Du;Ling Shao
Most existing few-shot learning (FSL) methods require a large amount of labeled data for meta-training, which is a major limitation. To reduce the requirement for labels, a semi-supervised meta-training (SSMT) setting has been proposed for FSL, which includes only a few labeled samples and large numbers of unlabeled samples in the base classes. However, existing methods under this setting require class-aware sample selection from the unlabeled set, which violates the assumption that the set is unlabeled. In this paper, we propose a practical semi-supervised meta-training setting with truly unlabeled data to facilitate the application of FSL in realistic scenarios. To better utilize both the labeled and truly unlabeled data, we propose a simple and effective meta-training framework, called pseudo-labeling based meta-learning (PLML). Firstly, we train a classifier via common semi-supervised learning (SSL) and use it to obtain pseudo-labels for the unlabeled data. Then we build few-shot tasks from the labeled and pseudo-labeled data and design a novel finetuning method with feature smoothing and noise suppression to better learn the FSL model from noisy labels. Surprisingly, through extensive experiments across two FSL datasets, we find that this simple meta-training framework effectively prevents the performance degradation of various FSL models under limited labeled data, and also significantly outperforms representative SSMT models. Besides, benefiting from meta-training, our method also improves several representative SSL algorithms. We provide the training code and usage examples at https://github.com/ouyangtianran/PLML.
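The first step, obtaining pseudo-labels from an SSL-trained classifier, can be sketched as confidence-filtered prediction. The threshold and the classifier stand-in below are assumptions; the full PLML pipeline additionally builds few-shot tasks and applies feature smoothing and noise suppression on top of this step.

```python
# Confidence-filtered pseudo-labeling (threshold and classifier are assumed;
# PLML additionally builds few-shot tasks with feature smoothing and noise
# suppression on top of this step).
import torch

def pseudo_label(classifier, x_unlabeled, threshold=0.9):
    with torch.no_grad():
        probs = torch.softmax(classifier(x_unlabeled), dim=1)
    conf, labels = probs.max(dim=1)
    keep = conf >= threshold              # keep confident predictions only
    return x_unlabeled[keep], labels[keep]

clf = torch.nn.Linear(128, 10)            # stand-in for an SSL-trained classifier
x_u = torch.randn(256, 128)               # truly unlabeled base-class samples
x_pl, y_pl = pseudo_label(clf, x_u)       # inputs paired with pseudo-labels
```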
{"title":"Pseudo-Labeling Based Practical Semi-Supervised Meta-Training for Few-Shot Learning","authors":"Xingping Dong;Tianran Ouyang;Shengcai Liao;Bo Du;Ling Shao","doi":"10.1109/TIP.2024.3461472","DOIUrl":"10.1109/TIP.2024.3461472","url":null,"abstract":"Most existing few-shot learning (FSL) methods require a large amount of labeled data in meta-training, which is a major limit. To reduce the requirement of labels, a semi-supervised meta-training (SSMT) setting has been proposed for FSL, which includes only a few labeled samples and numbers of unlabeled samples in base classes. However, existing methods under this setting require class-aware sample selection from the unlabeled set, which violates the assumption of unlabeled set. In this paper, we propose a practical semi-supervised meta-training setting with truly unlabeled data to facilitate the applications of FSL in realistic scenarios. To better utilize both the labeled and truly unlabeled data, we propose a simple and effective meta-training framework, called pseudo-labeling based meta-learning (PLML). Firstly, we train a classifier via common semi-supervised learning (SSL) and use it to obtain the pseudo-labels of unlabeled data. Then we build few-shot tasks from labeled and pseudo-labeled data and design a novel finetuning method with feature smoothing and noise suppression to better learn the FSL model from noise labels. Surprisingly, through extensive experiments across two FSL datasets, we find that this simple meta-training framework effectively prevents the performance degradation of various FSL models under limited labeled data, and also significantly outperforms the representative SSMT models. Besides, benefiting from meta-training, our method also improves several representative SSL algorithms as well. We provide the training code and usage examples at \u0000<uri>https://github.com/ouyangtianran/PLML</uri>\u0000.","PeriodicalId":94032,"journal":{"name":"IEEE transactions on image processing : a publication of the IEEE Signal Processing Society","volume":"33 ","pages":"5663-5675"},"PeriodicalIF":0.0,"publicationDate":"2024-09-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142275354","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Exploring the Spectral Prior for Hyperspectral Image Super-Resolution
Qian Hu;Xinya Wang;Junjun Jiang;Xiao-Ping Zhang;Jiayi Ma
In recent years, many single hyperspectral image super-resolution methods have emerged to enhance the spatial resolution of hyperspectral images without hardware modification. However, existing methods typically face two significant challenges. First, they struggle to handle the high-dimensional nature of hyperspectral data, which often results in high computational complexity and inefficient information utilization. Second, they have not fully leveraged the abundant spectral information in hyperspectral images. To address these challenges, we propose a novel hyperspectral super-resolution network named SNLSR, which transfers the super-resolution problem into the abundance domain. Our SNLSR leverages a spatial preserve decomposition network to estimate the abundance representations of the input hyperspectral image. Notably, the network acknowledges and utilizes the commonly overlooked spatial correlations of hyperspectral images, leading to better reconstruction performance. Then, the estimated low-resolution abundance is super-resolved through a spatial spectral attention network, where the informative features from both the spatial and spectral domains are fully exploited. Considering that hyperspectral images are highly spectrally correlated, we customize a spectral-wise non-local attention module to mine similar pixels along the spectral dimension for high-frequency detail recovery. Extensive experiments demonstrate the superiority of our method over other state-of-the-art methods both visually and quantitatively. Our code is publicly available at https://github.com/HuQ1an/SNLSR.
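A minimal sketch of spectral-wise non-local attention follows, assuming each spectral band is treated as a token and band-to-band affinities reweight the features; the parameter-free formulation below is an illustrative simplification, not the SNLSR module.

```python
# Parameter-free sketch of spectral-wise non-local attention (an assumed
# simplification, not the SNLSR module): each band is a token and band-band
# affinities mix features, with a residual connection back to the input.
import torch
import torch.nn as nn

class SpectralNonLocal(nn.Module):
    def forward(self, x):                          # x: (B, C, H, W), C = bands
        b, c, h, w = x.shape
        tokens = x.reshape(b, c, h * w)            # one token per spectral band
        attn = torch.softmax(
            tokens @ tokens.transpose(1, 2) / (h * w) ** 0.5, dim=-1)  # (B, C, C)
        out = attn @ tokens                        # aggregate similar bands
        return out.reshape(b, c, h, w) + x         # residual connection

y = SpectralNonLocal()(torch.randn(2, 31, 32, 32))  # 31-band toy input
```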
{"title":"Exploring the Spectral Prior for Hyperspectral Image Super-Resolution","authors":"Qian Hu;Xinya Wang;Junjun Jiang;Xiao-Ping Zhang;Jiayi Ma","doi":"10.1109/TIP.2024.3460470","DOIUrl":"10.1109/TIP.2024.3460470","url":null,"abstract":"In recent years, many single hyperspectral image super-resolution methods have emerged to enhance the spatial resolution of hyperspectral images without hardware modification. However, existing methods typically face two significant challenges. First, they struggle to handle the high-dimensional nature of hyperspectral data, which often results in high computational complexity and inefficient information utilization. Second, they have not fully leveraged the abundant spectral information in hyperspectral images. To address these challenges, we propose a novel hyperspectral super-resolution network named SNLSR, which transfers the super-resolution problem into the abundance domain. Our SNLSR leverages a spatial preserve decomposition network to estimate the abundance representations of the input hyperspectral image. Notably, the network acknowledges and utilizes the commonly overlooked spatial correlations of hyperspectral images, leading to better reconstruction performance. Then, the estimated low-resolution abundance is super-resolved through a spatial spectral attention network, where the informative features from both spatial and spectral domains are fully exploited. Considering that the hyperspectral image is highly spectrally correlated, we customize a spectral-wise non-local attention module to mine similar pixels along spectral dimension for high-frequency detail recovery. Extensive experiments demonstrate the superiority of our method over other state-of-the-art methods both visually and metrically. Our code is publicly available at \u0000<uri>https://github.com/HuQ1an/SNLSR</uri>\u0000.","PeriodicalId":94032,"journal":{"name":"IEEE transactions on image processing : a publication of the IEEE Signal Processing Society","volume":"33 ","pages":"5260-5272"},"PeriodicalIF":0.0,"publicationDate":"2024-09-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142273380","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Dual Consensus Anchor Learning for Fast Multi-View Clustering
Yalan Qin;Chuan Qin;Xinpeng Zhang;Guorui Feng
Multi-view clustering usually attempts to improve the final performance by integrating graph structure information from different views, and anchor-based methods have been presented to reduce the computation cost on large-scale datasets. Despite significant progress, these methods pay little attention to ensuring that the cluster structure correspondence between anchor graph and partition is built on multi-view datasets. Besides, they fail to discover the anchor graph depicting the shared cluster assignment across views under the orthogonal constraint on actual bases in factorization. In this paper, we propose a novel Dual consensus Anchor Learning for Fast multi-view clustering (DALF) method, where the cluster structure correspondence between anchor graph and partition is guaranteed on large-scale multi-view datasets. It jointly learns anchors, constructs the anchor graph and performs partition under a unified framework, with a rank constraint imposed on the built Laplacian graph and an orthogonal constraint on the centroid representation. DALF simultaneously focuses on the cluster structure in the anchor graph and the partition, and the final cluster structure is shown in both. We introduce the orthogonal constraint on the centroid representation in anchor graph factorization, and the cluster assignment is directly constructed, with the cluster structure shown in the partition. We present an iterative algorithm for solving the formulated problem. Extensive experiments demonstrate the effectiveness and efficiency of DALF on different multi-view datasets compared with other methods.
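One way to picture the orthogonal constraint on the centroid representation is the closed-form Procrustes update below, under an assumed factorization objective min_C ||Z - H C^T||_F with C^T C = I; this is a generic sketch, not DALF's exact formulation or its full iterative algorithm.

```python
# Generic orthogonally-constrained update (an assumed objective, not DALF's
# exact model): min_C ||Z - H @ C.T||_F s.t. C.T @ C = I has the closed-form
# Procrustes solution C = U @ Vt from the SVD of Z.T @ H.
import numpy as np

def orthogonal_centroid_update(Z, H):
    U, _, Vt = np.linalg.svd(Z.T @ H, full_matrices=False)
    return U @ Vt                          # orthonormal columns by construction

rng = np.random.default_rng(0)
Z = rng.random((500, 32))                  # anchor graph: samples x anchors
H = rng.random((500, 5))                   # soft partition: samples x clusters
C = orthogonal_centroid_update(Z, H)       # centroid representation (32 x 5)
assert np.allclose(C.T @ C, np.eye(5))     # orthogonality holds exactly
```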
{"title":"Dual Consensus Anchor Learning for Fast Multi-View Clustering","authors":"Yalan Qin;Chuan Qin;Xinpeng Zhang;Guorui Feng","doi":"10.1109/TIP.2024.3459651","DOIUrl":"10.1109/TIP.2024.3459651","url":null,"abstract":"Multi-view clustering usually attempts to improve the final performance by integrating graph structure information from different views and methods based on anchor are presented to reduce the computation cost for datasets with large scales. Despite significant progress, these methods pay few attentions to ensuring that the cluster structure correspondence between anchor graph and partition is built on multi-view datasets. Besides, they ignore to discover the anchor graph depicting the shared cluster assignment across views under the orthogonal constraint on actual bases in factorization. In this paper, we propose a novel Dual consensus Anchor Learning for Fast multi-view clustering (DALF) method, where the cluster structure correspondence between anchor graph and partition is guaranteed on multi-view datasets with large scales. It jointly learns anchors, constructs anchor graph and performs partition under a unified framework with the rank constraint imposed on the built Laplacian graph and the orthogonal constraint on the centroid representation. DALF simultaneously focuses on the cluster structure in the anchor graph and partition. The final cluster structure is simultaneously shown in the anchor graph and partition. We introduce the orthogonal constraint on the centroid representation in anchor graph factorization and the cluster assignment is directly constructed, where the cluster structure is shown in the partition. We present an iterative algorithm for solving the formulated problem. Extensive experiments demonstrate the effectiveness and efficiency of DALF on different multi-view datasets compared with other methods.","PeriodicalId":94032,"journal":{"name":"IEEE transactions on image processing : a publication of the IEEE Signal Processing Society","volume":"33 ","pages":"5298-5311"},"PeriodicalIF":0.0,"publicationDate":"2024-09-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142245414","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
SIM-OFE: Structure Information Mining and Object-Aware Feature Enhancement for Fine-Grained Visual Categorization
Hongbo Sun;Xiangteng He;Jinglin Xu;Yuxin Peng
Fine-grained visual categorization (FGVC) aims to distinguish visual objects among multiple subcategories of a coarse-grained category. Subtle inter-class differences among the various subcategories make the FGVC task challenging. Existing methods primarily focus on learning salient visual patterns while ignoring how to capture the object's internal structure, making it difficult to obtain complete discriminative regions within the object, which limits FGVC performance. To address the above issue, we propose a Structure Information Mining and Object-aware Feature Enhancement (SIM-OFE) method for fine-grained visual categorization, which explores the visual object's internal structure composition and appearance traits. Concretely, we first propose a simple yet effective hybrid perception attention module for locating visual objects based on global-scope and local-scope significance analyses. Then, a structure information mining module is proposed to model the distribution and context relations of critical regions within the object, highlighting the whole object and the discriminative regions for distinguishing subtle differences. Finally, an object-aware feature enhancement module is proposed to combine global-scope and local-scope discriminative features in an attentive coupling way, yielding powerful visual representations for fine-grained recognition. Extensive experiments on three FGVC benchmark datasets demonstrate that our proposed SIM-OFE method achieves state-of-the-art performance.
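A hedged sketch of combining global-scope and local-scope significance into one attention map is given below; the squeeze-style channel gate and the single local convolution are illustrative assumptions, not the paper's hybrid perception attention module.

```python
# Illustrative stand-in for hybrid global/local attention (layer choices are
# assumptions): a global channel gate and a local spatial map jointly
# reweight the feature tensor.
import torch
import torch.nn as nn

class HybridAttention(nn.Module):
    def __init__(self, channels):
        super().__init__()
        self.global_fc = nn.Linear(channels, channels)          # global scope
        self.local_conv = nn.Conv2d(channels, 1, 7, padding=3)  # local scope

    def forward(self, x):                                   # x: (B, C, H, W)
        g = torch.sigmoid(self.global_fc(x.mean(dim=(2, 3))))  # (B, C)
        loc = torch.sigmoid(self.local_conv(x))                # (B, 1, H, W)
        return x * g[:, :, None, None] * loc                # joint reweighting

y = HybridAttention(64)(torch.randn(2, 64, 28, 28))
```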
{"title":"SIM-OFE: Structure Information Mining and Object-Aware Feature Enhancement for Fine-Grained Visual Categorization","authors":"Hongbo Sun;Xiangteng He;Jinglin Xu;Yuxin Peng","doi":"10.1109/TIP.2024.3459788","DOIUrl":"10.1109/TIP.2024.3459788","url":null,"abstract":"Fine-grained visual categorization (FGVC) aims to distinguish visual objects from multiple subcategories of the coarse-grained category. Subtle inter-class differences among various subcategories make the FGVC task more challenging. Existing methods primarily focus on learning salient visual patterns while ignoring how to capture the object’s internal structure, causing difficulty in obtaining complete discriminative regions within the object to limit FGVC performance. To address the above issue, we propose a Structure Information Mining and Object-aware Feature Enhancement (SIM-OFE) method for fine-grained visual categorization, which explores the visual object’s internal structure composition and appearance traits. Concretely, we first propose a simple yet effective hybrid perception attention module for locating visual objects based on global-scope and local-scope significance analyses. Then, a structure information mining module is proposed to model the distribution and context relation of critical regions within the object, highlighting the whole object and discriminative regions for distinguishing subtle differences. Finally, an object-aware feature enhancement module is proposed to combine global-scope and local-scope discriminative features in an attentive coupling way for powerful visual representations in fine-grained recognition. Extensive experiments on three FGVC benchmark datasets demonstrate that our proposed SIM-OFE method can achieve state-of-the-art performance.","PeriodicalId":94032,"journal":{"name":"IEEE transactions on image processing : a publication of the IEEE Signal Processing Society","volume":"33 ","pages":"5312-5326"},"PeriodicalIF":0.0,"publicationDate":"2024-09-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142245412","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Learning Transferable Conceptual Prototypes for Interpretable Unsupervised Domain Adaptation
Junyu Gao;Xinhong Ma;Changsheng Xu
Despite the great progress of unsupervised domain adaptation (UDA) with deep neural networks, current UDA models are opaque and cannot provide convincing explanations, limiting their applications in scenarios that require safe and controllable model decisions. At present, a surge of work focuses on designing deep interpretable methods that rely on adequate data annotations, and only a few methods consider the distributional shift problem. Most existing interpretable UDA methods are post-hoc ones, which cannot facilitate the model learning process for performance enhancement. In this paper, we propose an inherently interpretable method, named Transferable Conceptual Prototype Learning (TCPL), which can simultaneously interpret and improve the processes of knowledge transfer and decision-making in UDA. To achieve this goal, we design a hierarchically prototypical module that transfers categorical basic concepts from the source domain to the target domain and learns domain-shared prototypes for explaining the underlying reasoning process. With the learned transferable prototypes, a self-predictive consistent pseudo-label strategy that fuses confidence, predictions, and prototype information is designed for selecting suitable target samples for pseudo annotation and gradually narrowing the domain gap. Comprehensive experiments show that the proposed method can not only provide effective and intuitive explanations but also outperform previous state-of-the-art methods. Code is available at https://drive.google.com/file/d/1b1EHFghiF1ExD-Cn1HYg75VutfkXWp60/view?usp=sharing.
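The pseudo-label strategy can be pictured as a rule that keeps a target sample only when classifier confidence and prototype affinity agree. The sketch below is an assumed simplification of the self-predictive consistent strategy; the threshold tau and the distance-based similarity are illustrative choices, not TCPL's exact criterion.

```python
# Assumed simplification of a confidence-and-prototype pseudo-label rule
# (not TCPL's exact strategy): a target sample is kept only when classifier
# prediction and nearest prototype agree and their fused score is high.
import torch

def select_pseudo_labels(logits, feats, prototypes, tau=0.5):
    probs = torch.softmax(logits, dim=1)
    conf, pred = probs.max(dim=1)                        # prediction + confidence
    proto_sim = torch.softmax(-torch.cdist(feats, prototypes), dim=1)
    agree = proto_sim.argmax(dim=1) == pred              # both cues must agree
    keep = agree & (conf * proto_sim.max(dim=1).values >= tau)
    return pred[keep], keep

logits, feats = torch.randn(100, 5), torch.randn(100, 64)
prototypes = torch.randn(5, 64)                          # one prototype per class
labels, mask = select_pseudo_labels(logits, feats, prototypes)
```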
{"title":"Learning Transferable Conceptual Prototypes for Interpretable Unsupervised Domain Adaptation","authors":"Junyu Gao;Xinhong Ma;Changsheng Xu","doi":"10.1109/TIP.2024.3459626","DOIUrl":"10.1109/TIP.2024.3459626","url":null,"abstract":"Despite the great progress of unsupervised domain adaptation (UDA) with the deep neural networks, current UDA models are opaque and cannot provide promising explanations, limiting their applications in the scenarios that require safe and controllable model decisions. At present, a surge of work focuses on designing deep interpretable methods with adequate data annotations and only a few methods consider the distributional shift problem. Most existing interpretable UDA methods are post-hoc ones, which cannot facilitate the model learning process for performance enhancement. In this paper, we propose an inherently interpretable method, named Transferable Conceptual Prototype Learning (TCPL), which could simultaneously interpret and improve the processes of knowledge transfer and decision-making in UDA. To achieve this goal, we design a hierarchically prototypical module that transfers categorical basic concepts from the source domain to the target domain and learns domain-shared prototypes for explaining the underlying reasoning process. With the learned transferable prototypes, a self-predictive consistent pseudo-label strategy that fuses confidence, predictions, and prototype information, is designed for selecting suitable target samples for pseudo annotations and gradually narrowing down the domain gap. Comprehensive experiments show that the proposed method can not only provide effective and intuitive explanations but also outperform previous state-of-the-arts. Code is available at \u0000<uri>https://drive.google.com/file/d/1b1EHFghiF1ExD-Cn1HYg75VutfkXWp60/view?usp=sharing</uri>\u0000.","PeriodicalId":94032,"journal":{"name":"IEEE transactions on image processing : a publication of the IEEE Signal Processing Society","volume":"33 ","pages":"5284-5297"},"PeriodicalIF":0.0,"publicationDate":"2024-09-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142245413","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
To Boost Zero-Shot Generalization for Embodied Reasoning With Vision-Language Pre-Training
Ke Su;Xingxing Zhang;Siyang Zhang;Jun Zhu;Bo Zhang
Recently, there has been increased research interest in embodied artificial intelligence (EAI), in which an agent learns to perform a specific task while dynamically interacting with the surrounding 3D environment. Here, a new challenge is that many unseen objects may appear due to the increased number of object categories in 3D scenes, which makes it necessary to develop models with strong zero-shot generalization to new objects. Existing work tries to achieve this goal by providing embodied agents with massive high-quality human annotations closely related to the task to be learned, which is too costly in practice. Inspired by recent advances in pre-trained models on 2D visual tasks, we attempt to boost zero-shot generalization for embodied reasoning with vision-language pre-training, which can encode common sense as general prior knowledge. To further improve performance on a specific task, we rectify the pre-trained representation through masked scene graph modeling (MSGM) in a self-supervised manner, where task-specific knowledge is learned via iterative message passing. Our method improves a variety of representative embodied reasoning tasks by a large margin (e.g., over 5.0% in answer accuracy on the MP3D-EQA dataset, which consists of many real-world scenes with a large number of new objects during testing), and achieves new state-of-the-art performance.
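As a rough picture of masked scene graph modeling as a self-supervised objective: hide some node features, run message passing over the graph, and penalize the reconstruction error on the hidden nodes. The graph construction and the single linear message function below are assumptions, not the paper's MSGM module.

```python
# Toy masked scene graph modeling objective (graph and message function are
# assumptions, not the paper's MSGM): hide some node features, run one round
# of message passing, and reconstruct the hidden features.
import torch
import torch.nn as nn

nodes = torch.randn(8, 32)                 # 8 scene objects, 32-d features
adj = (torch.rand(8, 8) < 0.4).float()     # toy scene-graph adjacency
mask = torch.zeros(8, dtype=torch.bool)
mask[:2] = True                            # nodes whose features are hidden

message = nn.Linear(32, 32)                # learnable message function
visible = nodes.clone()
visible[mask] = 0.0                        # remove masked node features
aggregated = adj @ message(visible)        # one message-passing step
loss = nn.functional.mse_loss(aggregated[mask], nodes[mask])
loss.backward()                            # self-supervised reconstruction
```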
{"title":"To Boost Zero-Shot Generalization for Embodied Reasoning With Vision-Language Pre-Training","authors":"Ke Su;Xingxing Zhang;Siyang Zhang;Jun Zhu;Bo Zhang","doi":"10.1109/TIP.2024.3459800","DOIUrl":"10.1109/TIP.2024.3459800","url":null,"abstract":"Recently, there exists an increased research interest in embodied artificial intelligence (EAI), which involves an agent learning to perform a specific task when dynamically interacting with the surrounding 3D environment. There into, a new challenge is that many unseen objects may appear due to the increased number of object categories in 3D scenes. It makes developing models with strong zero-shot generalization ability to new objects necessary. Existing work tries to achieve this goal by providing embodied agents with massive high-quality human annotations closely related to the task to be learned, while it is too costly in practice. Inspired by recent advances in pre-trained models in 2D visual tasks, we attempt to boost zero-shot generalization for embodied reasoning with vision-language pre-training that can encode common sense as general prior knowledge. To further improve its performance on a specific task, we rectify the pre-trained representation through masked scene graph modeling (MSGM) in a self-supervised manner, where the task-specific knowledge is learned from iterative message passing. Our method can improve a variety of representative embodied reasoning tasks by a large margin (e.g., over 5.0% w.r.t. answer accuracy on MP3D-EQA dataset that consists of many real-world scenes with a large number of new objects during testing), and achieve the new state-of-the-art performance.","PeriodicalId":94032,"journal":{"name":"IEEE transactions on image processing : a publication of the IEEE Signal Processing Society","volume":"33 ","pages":"5370-5381"},"PeriodicalIF":0.0,"publicationDate":"2024-09-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142245420","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
CWSCNet: Channel-Weighted Skip Connection Network for Underwater Object Detection
Long Chen;Yunzhou Xie;Yaxin Li;Qi Xu;Junyu Dong
Autonomous underwater vehicles (AUVs) equipped with intelligent underwater object detection techniques are of great significance for underwater navigation. Advanced underwater object detection frameworks adopt skip connections to enhance the feature representation, which further boosts detection precision. However, we reveal two limitations of standard skip connections: 1) standard skip connections do not consider feature heterogeneity, resulting in a sub-optimal feature fusion strategy; 2) feature redundancy exists in the skip-connected features: not all channels in the fused feature maps are equally important, so network learning should focus on the informative channels rather than the redundant ones. In this paper, we propose a novel channel-weighted skip connection network (CWSCNet) that learns multiple hyper-fusion features to improve multi-scale underwater object detection. In CWSCNet, a novel feature fusion module, named channel-weighted skip connection (CWSC), is proposed to adaptively adjust the importance of different channels during feature fusion. The CWSC module removes feature heterogeneity, strengthening the compatibility of different feature maps, and also works as an effective feature selection strategy that enables CWSCNet to focus on learning channels with more object-related information. Extensive experiments on three underwater object detection datasets, RUOD, URPC2017 and URPC2018, show that the proposed CWSCNet achieves comparable or state-of-the-art performance in underwater object detection.
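The CWSC idea, reweighting fused skip features per channel, can be sketched with a squeeze-excitation-style gate; this is an illustrative stand-in under assumed shapes, not the exact CWSC module.

```python
# Illustrative squeeze-excitation-style gate (assumed shapes, not the exact
# CWSC module): fused skip features are reweighted per channel so learning
# focuses on informative channels.
import torch
import torch.nn as nn

class ChannelWeightedSkip(nn.Module):
    def __init__(self, channels, reduction=4):
        super().__init__()
        self.gate = nn.Sequential(
            nn.Linear(channels, channels // reduction), nn.ReLU(),
            nn.Linear(channels // reduction, channels), nn.Sigmoid(),
        )

    def forward(self, skip, upsampled):          # both: (B, C, H, W)
        fused = skip + upsampled                 # plain skip connection
        w = self.gate(fused.mean(dim=(2, 3)))    # per-channel importance (B, C)
        return fused * w[:, :, None, None]       # suppress redundant channels

out = ChannelWeightedSkip(64)(torch.randn(2, 64, 40, 40),
                              torch.randn(2, 64, 40, 40))
```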
{"title":"CWSCNet: Channel-Weighted Skip Connection Network for Underwater Object Detection","authors":"Long Chen;Yunzhou Xie;Yaxin Li;Qi Xu;Junyu Dong","doi":"10.1109/TIP.2024.3457246","DOIUrl":"10.1109/TIP.2024.3457246","url":null,"abstract":"Autonomous underwater vehicles (AUVs) equipped with the intelligent underwater object detection technique is of great significance for underwater navigation. Advanced underwater object detection frameworks adopt skip connections to enhance the feature representation which further boosts the detection precision. However, we reveal two limitations of standard skip connections: 1) standard skip connections do not consider the feature heterogeneity, resulting in a sub-optimal feature fusion strategy; 2) feature redundancy exists in the skip connected features that not all the channels in the fused feature maps are equally important, the network learning should focus on the informative channels rather than the redundant ones. In this paper, we propose a novel channel-weighted skip connection network (CWSCNet) to learn multiple hyper fusion features for improving multi-scale underwater object detection. In CWSCNet, a novel feature fusion module, named channel-weighted skip connection (CWSC), is proposed to adaptively adjust the importance of different channels during feature fusion. The CWSC module removes feature heterogeneity that strengthens the compatibility of different feature maps, it also works as an effective feature selection strategy that enables CWSCNet to focus on learning channels with more object-related information. Extensive experiments on three underwater object detection datasets RUOD, URPC2017 and URPC2018 show that the proposed CWSCNet achieves comparable or state-of-the-art performances in underwater object detection.","PeriodicalId":94032,"journal":{"name":"IEEE transactions on image processing : a publication of the IEEE Signal Processing Society","volume":"33 ","pages":"5206-5218"},"PeriodicalIF":0.0,"publicationDate":"2024-09-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142245467","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0