
Latest articles in IEEE Transactions on Image Processing: A Publication of the IEEE Signal Processing Society

Image Super-Resolution via Efficient Transformer Embedding Frequency Decomposition With Restart
Yifan Zuo;Wenhao Yao;Yuqi Hu;Yuming Fang;Wei Liu;Yuxin Peng
Recently, transformer-based backbones have shown superior performance over their convolutional counterparts in computer vision. Because global attention has quadratic complexity with respect to the token number, local attention, with linear complexity, is typically adopted in low-level image processing. However, the limited receptive field harms performance. In this paper, motivated by Octave convolution, we propose a transformer-based single image super-resolution (SISR) model, which explicitly embeds dynamic frequency decomposition into the standard local transformer. All the frequency components are continuously updated and re-assigned via intra-scale attention and inter-scale interaction, respectively. Specifically, attention at low resolution is sufficient for low-frequency features, which not only increases the receptive field but also decreases the complexity. Compared with the standard local transformer, the proposed FDRTran layer decreases both FLOPs and parameters. By contrast, Octave convolution only decreases the FLOPs of the standard convolution while keeping the parameter number unchanged. In addition, a restart mechanism is proposed: every few frequency updates, the low and high frequencies are first fused and the features are then decomposed again. In this way, the features can be decomposed from multiple viewpoints by learnable parameters, which avoids the risk of early saturation of the frequency representation. Furthermore, built on the FDRTran layer with the restart mechanism, the proposed FDRNet is the first transformer backbone for SISR to explore the Octave design. Extensive experiments show that our model reaches state-of-the-art performance on six synthetic and real datasets. The code and the models are available at https://github.com/catnip1029/FDRNet.
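For intuition, here is a minimal PyTorch sketch (not the authors' FDRTran layer) of the Octave-style idea the abstract describes: part of the feature channels is processed at half resolution as a low-frequency branch, and a "restart" step re-fuses the branches before they are decomposed again. The module names and the split ratio alpha are illustrative assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class FrequencySplit(nn.Module):
    """Illustrative Octave-style decomposition: a fraction `alpha` of the
    channels is routed to a half-resolution low-frequency branch, the rest
    stays at full resolution as the high-frequency branch."""
    def __init__(self, channels, alpha=0.5):
        super().__init__()
        self.c_low = int(channels * alpha)
        self.c_high = channels - self.c_low

    def forward(self, x):
        x_high = x[:, :self.c_high]                  # full-resolution part
        x_low = F.avg_pool2d(x[:, self.c_high:], 2)  # downsample -> low frequency
        return x_high, x_low

class FrequencyMerge(nn.Module):
    """Restart step: fuse the two branches back into one tensor so that a
    fresh decomposition can be learned in the next stage."""
    def forward(self, x_high, x_low):
        x_low_up = F.interpolate(x_low, scale_factor=2, mode='nearest')
        return torch.cat([x_high, x_low_up], dim=1)

# usage sketch
x = torch.randn(1, 64, 48, 48)
split, merge = FrequencySplit(64, alpha=0.5), FrequencyMerge()
x_h, x_l = split(x)       # attention on x_l runs over fewer tokens, at reduced cost
fused = merge(x_h, x_l)   # "restart": re-fuse before decomposing again
```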
Citations: 0
Adaptive Prototype Learning for Weakly-supervised Temporal Action Localization.
Wang Luo, Huan Ren, Tianzhu Zhang, Wenfei Yang, Yongdong Zhang

Weakly-supervised Temporal Action Localization (WTAL) aims to localize action instances with only video-level labels available during training, where the two primary issues are localization incompleteness and background interference. To relieve these two issues, recent methods adopt an attention mechanism to activate action instances and simultaneously suppress background ones, and have achieved remarkable progress. Nevertheless, we argue that these two issues have not been well resolved yet. On the one hand, the attention mechanism adopts fixed weights for different videos, which cannot handle the diversity of different videos and is thus deficient in addressing localization incompleteness. On the other hand, previous methods focus only on learning foreground attention, and the attention weights usually suffer from ambiguity, making it difficult to suppress background interference. To deal with the above issues, in this paper we propose an Adaptive Prototype Learning (APL) method for WTAL, which includes two key designs: (1) an Adaptive Transformer Network (ATN) to explicitly model the background and learn video-adaptive prototypes for each specific video, and (2) an OT-based Collaborative (OTC) training strategy to guide the learning of prototypes and remove the ambiguity of the foreground-background separation by introducing an Optimal Transport (OT) algorithm into the collaborative training scheme between the RGB and FLOW streams. These two key designs work together to learn video-adaptive prototypes and solve the above two issues, achieving robust localization. Extensive experimental results on two standard benchmarks (THUMOS14 and ActivityNet) demonstrate that our proposed APL performs favorably against state-of-the-art methods.
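The OTC strategy builds on optimal transport; below is a minimal NumPy sketch of the standard entropic Sinkhorn iteration used to turn a segment-to-prototype cost matrix into a soft assignment. The cost values, uniform marginals, and epsilon are placeholder assumptions, not the paper's exact formulation.

```python
import numpy as np

def sinkhorn(cost, epsilon=0.05, n_iters=50):
    """Entropic-regularized optimal transport with uniform marginals.
    cost: (n_segments, n_prototypes) pairwise cost matrix.
    Returns a transport plan whose rows/columns sum to the marginals."""
    n, m = cost.shape
    r, c = np.ones(n) / n, np.ones(m) / m   # uniform marginals
    K = np.exp(-cost / epsilon)             # Gibbs kernel
    v = np.ones(m)
    for _ in range(n_iters):                # alternating scaling updates
        u = r / (K @ v)
        v = c / (K.T @ u)
    return u[:, None] * K * v[None, :]      # diag(u) @ K @ diag(v)

# usage: softly assign 8 video segments to 4 prototypes from a feature-distance cost
rng = np.random.default_rng(0)
cost = rng.random((8, 4))
plan = sinkhorn(cost)
pseudo_assignment = plan.argmax(axis=1)     # harden the plan into pseudo-labels
```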

Citations: 0
Low Overlapping Point Cloud Registration Using Mutual Prior Based Completion Network
Yazhou Liu;Zhiyong Liu
This work presents a new completion method specifically designed for low-overlapping partial point cloud registration. Based on the assumption that the candidate partial point clouds to be registered belong to the same target, the proposed mutual prior based completion (MPC) method uses these candidate partial point clouds as completion references for each other to extend their overlapping regions. Without relying on shape prior knowledge, MPC can work for different types of point clouds, such as objects, room scenes, and street views. The main challenge of this mutual reference approach is that partial clouds without spatial alignment cannot provide a reliable completion reference. Based on mutual information maximization, a progressive completion structure is developed to achieve pose, feature representation, and completion alignment between input point clouds. Experiments on public datasets show encouraging results. Especially for the low-overlapping cases, compared with state-of-the-art (SOTA) models, the size of the overlapping regions can be increased by about 15.0%, and the rotation and translation errors can be reduced by 30.8% and 57.7%, respectively.
Citations: 0
Linear Combinations of Patches are Unreasonably Effective for Single-Image Denoising
Sébastien Herbreteau;Charles Kervrann
In the past decade, deep neural networks have revolutionized image denoising, achieving significant accuracy improvements by learning on datasets composed of noisy/clean image pairs. However, this strategy is extremely dependent on training data quality, which is a well-established weakness. To alleviate the requirement to learn image priors externally, single-image (a.k.a. self-supervised or zero-shot) methods perform denoising based solely on the analysis of the input noisy image, without an external dictionary or training dataset. This work investigates the effectiveness of linear combinations of patches for denoising under this constraint. We show that, although conceptually very simple, linear combinations of patches are enough to achieve state-of-the-art performance. The proposed parametric approach relies on quadratic risk approximation via multiple pilot images to guide the estimation of the combination weights. Experiments on images corrupted artificially with Gaussian noise as well as on real-world noisy images demonstrate that our method is on par with the very best single-image denoisers, outperforming recent neural network-based techniques, while being much faster and fully interpretable.
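A minimal NumPy sketch of the underlying principle: a patch is denoised as a weighted linear combination of other noisy patches from the same image. The exponential similarity weights used here are the classic non-local-means choice, not the paper's quadratic-risk estimator, and the synthetic data is purely illustrative.

```python
import numpy as np

def combine_patches(ref_patch, candidate_patches, h=0.1):
    """Linear-combination denoising of one reference patch.
    ref_patch: (p*p,) flattened noisy patch to denoise.
    candidate_patches: (k, p*p) flattened noisy patches from the same image.
    Weights decay exponentially with squared distance to the reference; the
    denoised patch is their weighted average, i.e. a linear combination."""
    d2 = np.mean((candidate_patches - ref_patch) ** 2, axis=1)
    w = np.exp(-d2 / (h ** 2))
    w /= w.sum()
    return w @ candidate_patches   # (p*p,) denoised estimate

# usage with synthetic data: 50 noisy observations of the same clean 7x7 patch
rng = np.random.default_rng(0)
clean = rng.random(49)
noisy = clean + 0.1 * rng.standard_normal((50, 49))
estimate = combine_patches(noisy[0], noisy)
print(np.mean((estimate - clean) ** 2), "<", np.mean((noisy[0] - clean) ** 2))
```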
Citations: 0
A Trustworthy Counterfactual Explanation Method With Latent Space Smoothing
Yan Li;Xia Cai;Chunwei Wu;Xiao Lin;Guitao Cao
Despite the large-scale adoption of Artificial Intelligence (AI) models in healthcare, there is an urgent need for trustworthy tools to rigorously backtrack model decisions so that the models behave reliably. Counterfactual explanations take a counter-intuitive approach that allows users to explore "what if" scenarios, and are gradually becoming popular in the trustworthy-AI field. However, most previous work on counterfactual explanation of models cannot credibly generate in-distribution attributions, produces adversarial examples, or fails to give a confidence interval for the explanation. Hence, in this paper, we propose a novel approach that generates counterfactuals in a locally smooth, directed semantic embedding space and, at the same time, gives an uncertainty estimate for the counterfactual generation process. Specifically, we identify a low-dimensional directed semantic embedding space based on Principal Component Analysis (PCA) applied in a differential generative model. Then, we propose latent space smoothing regularization to keep the counterfactual search in-distribution, such that visually imperceptible changes are more robust to adversarial perturbations. Moreover, we put forth an uncertainty estimation framework for evaluating counterfactual uncertainty. Extensive experiments on several challenging, realistic Chest X-ray and CelebA datasets show that our approach performs consistently well and outperforms several existing state-of-the-art baseline approaches.
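As a rough sketch of the PCA step only, the snippet below extracts principal directions from a set of latent codes and moves one code along a direction to form a counterfactual candidate. The latent dimensionality and data are hypothetical, and the smoothing regularization and uncertainty estimation of the paper are not modeled.

```python
import numpy as np

def pca_directions(latents, k=5):
    """Principal directions of a set of latent codes (n, d): the top-k
    right singular vectors of the centered matrix."""
    centered = latents - latents.mean(axis=0)
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    return vt[:k]                    # (k, d) orthonormal directions

def move_along(z, direction, step):
    """Edit a latent code by walking along one semantic direction."""
    return z + step * direction

# usage: a hypothetical 128-D latent space with 1000 encoded samples
rng = np.random.default_rng(1)
latents = rng.standard_normal((1000, 128))
dirs = pca_directions(latents, k=3)
z_cf = move_along(latents[0], dirs[0], step=2.0)   # candidate counterfactual code
```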
Citations: 0
Deep Cross-View Reconstruction GAN Based on Correlated Subspace for Multi-View Transformation
Jian-Xun Mi;Junchang He;Weisheng Li
In scenarios where identifying face information in the visible spectrum (VIS) is challenging due to poor lighting conditions, near-infrared (NIR) and thermal (TH) cameras can provide viable alternatives. However, the data distribution of images captured by these cameras differs from that of VIS images, which makes matching face identities challenging. To address these challenges, we propose a novel image transformation framework. The framework includes feature extraction from the input image, followed by a transformation network that generates target-domain images with perceptual fidelity. Additionally, a reconstruction network preserves the original information by reconstructing the original-domain image from the extracted features. By considering the correlation between features from both domains, our framework utilizes paired data obtained from the same individual. We apply this framework to two well-established image-to-image transformation models, pix2pix and CycleGAN, yielding CRC-pix2pix and CRC-CycleGAN, respectively. The versatility of our approach allows extension to other models based on pix2pix or CycleGAN architectures. Our models generate high-quality images while preserving the identity information of the original face. Performance evaluation on the TFW and BUAA NIR-VIS datasets demonstrates the superiority of our models in terms of face matching on the generated images and evaluation metrics such as SSIM, MSE, PSNR, and LPIPS. Moreover, we introduce the CQUPT-VIS-TH dataset, which enriches the paired data with thermal-visual face images capturing various angles and expressions.
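A minimal PyTorch sketch of the shared-feature design described above: one encoder feeds both a transformation head (source to target domain) and a reconstruction head (back to the source domain), trained on paired data. The layer sizes and the plain L1 losses are illustrative assumptions, not the CRC-pix2pix or CRC-CycleGAN architectures.

```python
import torch
import torch.nn as nn

class CrossViewModel(nn.Module):
    """Shared encoder with two heads: a transformation head mapping features
    into the target spectrum (e.g. VIS) and a reconstruction head recovering
    the source image (e.g. NIR), so the features keep information for both."""
    def __init__(self, ch=32):
        super().__init__()
        self.encoder = nn.Sequential(nn.Conv2d(3, ch, 3, padding=1), nn.ReLU(),
                                     nn.Conv2d(ch, ch, 3, padding=1), nn.ReLU())
        self.transform = nn.Conv2d(ch, 3, 3, padding=1)    # source -> target domain
        self.reconstruct = nn.Conv2d(ch, 3, 3, padding=1)  # back to source domain

    def forward(self, x):
        f = self.encoder(x)
        return self.transform(f), self.reconstruct(f)

# one hypothetical training step on a paired (NIR, VIS) batch
model = CrossViewModel()
nir, vis = torch.rand(2, 3, 64, 64), torch.rand(2, 3, 64, 64)
fake_vis, rec_nir = model(nir)
loss = nn.functional.l1_loss(fake_vis, vis) + nn.functional.l1_loss(rec_nir, nir)
loss.backward()
```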
Citations: 0
Lightweight Prompt Learning Implicit Degradation Estimation Network for Blind Super Resolution
Asif Hussain Khan;Christian Micheloni;Niki Martinel
Blind image super-resolution (SR) aims to recover a high-resolution (HR) image from its low-resolution (LR) counterpart under the assumption of unknown degradations. Many existing blind SR methods rely on supervision from ground-truth kernels and are referred to as explicit degradation estimators. However, it is very challenging to obtain ground truths for different degradation kernels. Moreover, most of these methods rely on heavy backbone networks, which demand extensive computational resources. Implicit degradation estimators do not require ground-truth kernels, but they show a significant performance gap relative to explicit degradation estimators due to this missing information. We present a novel approach that significantly narrows this gap by means of a lightweight architecture that implicitly learns the degradation kernel with the help of a novel loss component. The kernel is exploited by a learnable Wiener filter that performs efficient deconvolution in the Fourier domain via a closed-form solution. Inspired by prompt-based learning, we also propose a novel degradation-conditioned prompt layer that exploits the estimated kernel to focus on the discriminative contextual information that guides the reconstruction process in recovering the latent HR image. Extensive experiments under different degradation settings demonstrate that our model, named PL-IDENet, yields PSNR and SSIM improvements of more than 0.4 dB and 1.3%, and 1.4 dB and 4.8%, over the best implicit and explicit blind-SR methods, respectively. These results are achieved while maintaining a substantially lower number of parameters/FLOPs (i.e., 25% and 68% fewer parameters than the best implicit and explicit methods, respectively).
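The closed-form Wiener deconvolution mentioned above can be sketched in a few lines of NumPy. The kernel, SNR value, and test image below are assumptions for illustration, and the learnable components of PL-IDENet are not modeled.

```python
import numpy as np

def pad_and_center(kernel, shape):
    """Zero-pad a small kernel to the image shape and roll it so that its
    center sits at the origin (needed for FFT-based circular convolution)."""
    kh, kw = kernel.shape
    k = np.zeros(shape)
    k[:kh, :kw] = kernel
    return np.roll(k, (-(kh // 2), -(kw // 2)), axis=(0, 1))

def wiener_deconv(y, kernel, snr=1e4):
    """Closed-form Wiener deconvolution in the Fourier domain:
    X = conj(K) * Y / (|K|^2 + 1/snr)."""
    K = np.fft.fft2(pad_and_center(kernel, y.shape))
    Y = np.fft.fft2(y)
    X = np.conj(K) * Y / (np.abs(K) ** 2 + 1.0 / snr)
    return np.real(np.fft.ifft2(X))

# usage: blur with a 5x5 box kernel (circularly), then invert the blur
rng = np.random.default_rng(0)
img = rng.random((64, 64))
k = np.ones((5, 5)) / 25.0
K = np.fft.fft2(pad_and_center(k, img.shape))
blurred = np.real(np.fft.ifft2(np.fft.fft2(img) * K))
restored = wiener_deconv(blurred, k)
print(np.abs(restored - img).mean())  # residual error, limited by the kernel's spectral zeros
```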
Citations: 0
Minimalist and High-Quality Panoramic Imaging With PSF-Aware Transformers
Qi Jiang;Shaohua Gao;Yao Gao;Kailun Yang;Zhonghua Yi;Hao Shi;Lei Sun;Kaiwei Wang
High-quality panoramic images with a Field of View (FoV) of 360° are essential for contemporary panoramic computer vision tasks. However, conventional imaging systems come with sophisticated lens designs and heavy optical components. This rules them out for many mobile and wearable applications where thin, portable, minimalist imaging systems are desired. In this paper, we propose a Panoramic Computational Imaging Engine (PCIE) to achieve minimalist and high-quality panoramic imaging. With fewer than three spherical lenses, a Minimalist Panoramic Imaging Prototype (MPIP) is constructed based on the design of the Panoramic Annular Lens (PAL), but it yields low-quality imaging results due to aberrations and the small image plane size. We propose two pipelines, i.e., Aberration Correction (AC) and Super-Resolution and Aberration Correction (SR&AC), to solve the image quality problems of MPIP for imaging sensors with small and large pixel sizes, respectively. To leverage the prior information of the optical system, we propose a Point Spread Function (PSF) representation method to produce a PSF map as an additional modality. A PSF-aware Aberration-image Recovery Transformer (PART) is designed as a universal network for the two pipelines, in which the self-attention calculation and feature extraction are guided by the PSF map. We train PART on synthetic image pairs from simulation and put forward the PALHQ dataset to fill the gap of real-world high-quality PAL images for low-level vision. A comprehensive variety of experiments on synthetic and real-world benchmarks demonstrates the impressive imaging results of PCIE and the effectiveness of the PSF representation. We further deliver heuristic experimental findings for minimalist and high-quality panoramic imaging, in terms of the choices of prototype and pipeline, network architecture, training strategies, and dataset construction. Our dataset and code will be available at https://github.com/zju-jiangqi/PCIE-PART.
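A minimal PyTorch sketch of using a PSF map as an additional modality: a per-pixel PSF representation is concatenated with image features to produce a spatial gate that modulates them. The channel counts and the gating design are illustrative assumptions, not the PART architecture.

```python
import torch
import torch.nn as nn

class PSFConditionedBlock(nn.Module):
    """Illustrative PSF conditioning: the per-pixel PSF encoding is fused with
    the image features to predict a spatial gate that guides restoration."""
    def __init__(self, feat_ch=32, psf_ch=8):
        super().__init__()
        self.fuse = nn.Sequential(
            nn.Conv2d(feat_ch + psf_ch, feat_ch, 3, padding=1), nn.ReLU(),
            nn.Conv2d(feat_ch, feat_ch, 3, padding=1), nn.Sigmoid())

    def forward(self, feat, psf_map):
        gate = self.fuse(torch.cat([feat, psf_map], dim=1))  # PSF-aware gate
        return feat * gate                                   # modulate features

# usage with hypothetical shapes: an 8-channel PSF encoding per pixel
feat = torch.randn(1, 32, 64, 64)
psf_map = torch.randn(1, 8, 64, 64)
out = PSFConditionedBlock()(feat, psf_map)
```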
Citations: 0
Multi-Granularity Part Sampling Attention for Fine-Grained Visual Classification
Jiahui Wang;Qin Xu;Bo Jiang;Bin Luo;Jinhui Tang
Fine-grained visual classification aims to distinguish similar sub-categories, with the challenges of large variations within the same sub-category and high visual similarity between different sub-categories. Recently, methods that extract semantic parts of the discriminative regions have attracted increasing attention. However, most existing methods extract part features via rectangular bounding boxes obtained from an object detection module or an attention mechanism, which makes it difficult to capture the rich shape information of objects. In this paper, we propose a novel Multi-Granularity Part Sampling Attention (MPSA) network for fine-grained visual classification. First, a novel multi-granularity part retrospect block is designed to extract part information at different scales and enhance the high-level feature representation with discriminative part features of different granularities. Then, to extract part features of various shapes at each granularity, we propose part sampling attention, which can comprehensively sample the implicit semantic parts on the feature maps. The proposed part sampling attention not only considers the importance of sampled parts but also adopts part dropout to reduce overfitting. In addition, we propose a novel multi-granularity fusion method to highlight the foreground features and suppress background noise with the assistance of the gradient class activation map. Experimental results demonstrate that the proposed MPSA achieves state-of-the-art performance on four commonly used fine-grained visual classification benchmarks. The source code is publicly available at https://github.com/mobulan/MPSA.
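A minimal PyTorch sketch of the part-sampling idea with part dropout: the top-k most-attended spatial positions are gathered as part features and randomly dropped during training. This hard top-k selection and the stand-in attention map are simplifications, not the paper's part sampling attention.

```python
import torch

def sample_parts(feat, attn, k=8, drop_p=0.2, training=True):
    """Pick the k most-attended spatial positions as 'part' features, then
    randomly drop some of them (part dropout) to reduce over-reliance on
    a few parts. feat: (B, C, H, W) features, attn: (B, H, W) attention."""
    B, C, H, W = feat.shape
    flat_feat = feat.flatten(2)                   # (B, C, H*W)
    flat_attn = attn.flatten(1)                   # (B, H*W)
    idx = flat_attn.topk(k, dim=1).indices        # (B, k) top positions
    parts = torch.gather(flat_feat, 2, idx.unsqueeze(1).expand(B, C, k))
    parts = parts.transpose(1, 2)                 # (B, k, C) part features
    if training:
        keep = (torch.rand(B, k, 1, device=feat.device) > drop_p).float()
        parts = parts * keep                      # part dropout
    return parts

# usage
feat = torch.randn(2, 64, 14, 14)
attn = feat.mean(dim=1)       # a stand-in attention map
parts = sample_parts(feat, attn, k=8)
print(parts.shape)            # torch.Size([2, 8, 64])
```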
Citations: 0
Semi-Supervised Semantic Segmentation for Light Field Images Using Disparity Information
Shansi Zhang;Yaping Zhao;Edmund Y. Lam
Light field (LF) images enable numerous applications due to their ability to capture information from multiple views. Semantic segmentation is an essential task for LF scene understanding. However, existing supervised methods heavily rely on a large number of pixel-wise annotations. To relieve this problem, we propose a semi-supervised LF semantic segmentation method that requires only a small subset of labeled data and harnesses the LF disparity information. First, we design an unsupervised disparity estimation network, which can determine the disparity map for every view. With the estimated disparity maps, we generate pseudo-labels along with their weight maps for the peripheral views when only the labels of the central views are available. We then merge the predictions from multiple views to obtain more reliable pseudo-labels for unlabeled data, and introduce a disparity-semantics consistency loss to enforce structural similarity. Moreover, we develop a comprehensive contrastive learning scheme that includes a pixel-level strategy to enhance feature representations and an object-level strategy to improve segmentation of individual objects. Our method demonstrates state-of-the-art performance on the benchmark LF semantic segmentation dataset under a variety of training settings and achieves performance comparable to supervised methods when trained under the 1/2 protocol.
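A minimal NumPy sketch of disparity-based pseudo-label generation: central-view labels are splatted to a peripheral view using an estimated disparity map and the angular offset (du, dv). Nearest-neighbor splatting and the toy label map are illustrative assumptions, not the paper's pipeline (which also produces weight maps and merges multi-view predictions).

```python
import numpy as np

def warp_labels(labels, disparity, du, dv):
    """Warp central-view semantic labels to a peripheral view using the
    estimated disparity map. A pixel (y, x) in the central view is projected
    to (y + dv * d, x + du * d) in the view offset by (du, dv); nearest-
    neighbor splatting, with -1 marking unassigned (occluded/out-of-view) pixels."""
    H, W = labels.shape
    warped = -np.ones((H, W), dtype=labels.dtype)
    ys, xs = np.mgrid[0:H, 0:W]
    ty = np.round(ys + dv * disparity).astype(int)
    tx = np.round(xs + du * disparity).astype(int)
    valid = (ty >= 0) & (ty < H) & (tx >= 0) & (tx < W)
    warped[ty[valid], tx[valid]] = labels[valid]
    return warped

# usage: shift a toy label map by one view step to the right (du=1)
labels = np.zeros((8, 8), dtype=int)
labels[2:6, 2:6] = 3
disp = np.full((8, 8), 2.0)    # constant disparity of 2 pixels per view step
pseudo = warp_labels(labels, disp, du=1, dv=0)
```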
Citations: 0