
Image and Vision Computing: Latest Publications

Diff-STAR: Exploring student-teacher adaptive reconstruction through diffusion-based generation for image harmonization
IF 4.2 CAS Tier 3 (Computer Science) Q2 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE Pub Date: 2024-09-06 DOI: 10.1016/j.imavis.2024.105254
An Cao, Gang Shen

Image harmonization aims to seamlessly integrate foreground and background elements from distinct photos into a visually realistic composite. However, achieving high-quality image composition remains challenging, as it requires adjusting color balance, retaining fine details, and ensuring perceptual consistency. This article introduces a novel approach named Diffusion-based Student-Teacher Adaptive Reconstruction (Diff-STAR) to address foreground adjustment by framing it as an image reconstruction task. Leveraging natural photographs for model pretraining eliminates the need for data augmentation within Diff-STAR's framework. Employing the pre-trained Denoising Diffusion Implicit Model (DDIM) enhances photorealism and fidelity in generating high-quality outputs from reconstructed latent representations. By effectively identifying similarities in low-frequency style and semantic relationships across various regions within latent images, we develop a student-teacher architecture combining Transformer encoders and decoders to predict adaptively masked patches derived through diffusion processes. Evaluated on public datasets including iHarmony4 and RealHM, Diff-STAR outperforms other state-of-the-art approaches on metrics including Mean Squared Error (MSE) and Peak Signal-to-Noise Ratio (PSNR).
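
Below is a minimal PyTorch sketch of the masked-patch reconstruction idea described in the abstract: a frozen teacher supplies latent patch features, an adaptive mask selects patches for the student to rebuild, and a small Transformer encoder reconstructs them. The norm-based masking heuristic, module sizes, and masking ratio are illustrative assumptions, not the authors' implementation (which operates in a pre-trained DDIM latent space).

```python
# Sketch only: adaptive masked-patch reconstruction with a Transformer student.
# The masking heuristic (feature norm) and all sizes are assumptions.
import torch
import torch.nn as nn

class MaskedPatchStudent(nn.Module):
    def __init__(self, patch_dim=768, depth=4, heads=8):
        super().__init__()
        layer = nn.TransformerEncoderLayer(patch_dim, heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, depth)
        self.head = nn.Linear(patch_dim, patch_dim)          # reconstructs patch embeddings
        self.mask_token = nn.Parameter(torch.zeros(1, 1, patch_dim))

    def forward(self, patches, mask):
        # patches: (B, N, D) latent patch embeddings; mask: (B, N) bool, True = masked
        x = torch.where(mask.unsqueeze(-1), self.mask_token.expand_as(patches), patches)
        return self.head(self.encoder(x))

def adaptive_mask(teacher_feats, ratio=0.5):
    # Mask the patches the teacher deems most informative (here: largest feature norm).
    scores = teacher_feats.norm(dim=-1)                      # (B, N)
    k = int(ratio * scores.shape[1])
    idx = scores.topk(k, dim=1).indices
    mask = torch.zeros_like(scores, dtype=torch.bool)
    return mask.scatter(1, idx, torch.ones_like(idx, dtype=torch.bool))

B, N, D = 2, 196, 768
teacher_feats = torch.randn(B, N, D)                         # stand-in for frozen teacher outputs
student = MaskedPatchStudent(D)
mask = adaptive_mask(teacher_feats)
recon = student(teacher_feats, mask)
loss = nn.functional.mse_loss(recon[mask], teacher_feats[mask])   # loss only on masked patches
loss.backward()
```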

Citations: 0
Few-shot class incremental learning via prompt transfer and knowledge distillation
IF 4.2 CAS Tier 3 (Computer Science) Q2 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE Pub Date: 2024-09-05 DOI: 10.1016/j.imavis.2024.105251
Feidu Akmel, Fanman Meng, Mingyu Liu, Runtong Zhang, Asebe Teka, Elias Lemuye

The ability of a model to learn incrementally from very limited data while still retaining knowledge about previously seen classes is called few-shot incremental learning. The challenge for few-shot learning models is data overfitting, while the challenge for incremental learning models is catastrophic forgetting. To address these problems, we propose a distillation algorithm coupled with prompting, which effectively addresses the problem encountered in few-shot class-incremental learning by facilitating the transfer of distilled knowledge from a source to a target prompt. Furthermore, we employ a feature embedding module that monitors the semantic similarity between the input labels and the semantic vectors. This enables the learners to receive additional guidance, thereby mitigating the occurrence of catastrophic forgetting and overfitting. As our third contribution, we introduce an attention-based knowledge distillation method that learns relative similarities between features by creating effective links between teacher and student. This enables the regulation of the distillation intensities of all potential pairs between teacher and student. To validate the effectiveness of our proposed method, we conducted extensive experiments on diverse datasets, including miniImageNet, CIFAR100, and CUB200. The results of these experiments demonstrated that our method achieves state-of-the-art performance.
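
As a rough illustration of the attention-based distillation idea, the following sketch computes pairwise similarities between student and teacher tokens and uses them to weight how strongly each student feature is pulled toward the teacher; the softmax temperature and the MSE objective are assumptions, not the paper's exact loss.

```python
# Sketch: similarity-weighted feature distillation between teacher and student tokens.
import torch
import torch.nn.functional as F

def attention_kd_loss(student_feats, teacher_feats, tau=2.0):
    # student_feats, teacher_feats: (B, N, D) token/patch features
    s = F.normalize(student_feats, dim=-1)
    t = F.normalize(teacher_feats, dim=-1)
    # Pairwise similarity between every student token and every teacher token.
    sim = torch.bmm(s, t.transpose(1, 2))                   # (B, N, N)
    # Attention over teacher tokens regulates how strongly each pair is distilled.
    weights = F.softmax(sim / tau, dim=-1)
    # Pull each student token toward its attention-weighted teacher aggregate.
    target = torch.bmm(weights, t)                           # (B, N, D)
    return F.mse_loss(s, target.detach())

student_feats = torch.randn(4, 49, 256, requires_grad=True)
teacher_feats = torch.randn(4, 49, 256)
loss = attention_kd_loss(student_feats, teacher_feats)
loss.backward()
```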

Citations: 0
Dual-branch underwater image enhancement network via multiscale neighborhood interaction attention learning
IF 4.2 CAS Tier 3 (Computer Science) Q2 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE Pub Date: 2024-09-04 DOI: 10.1016/j.imavis.2024.105256
Xun Ji, Xu Wang, Na Leng, Li-Ying Hao, Hui Guo

Due to light scattering and absorption, underwater images inevitably suffer from diverse quality degradations, including color distortion, low contrast, and blurred details. To address these problems, we present a dual-branch convolutional neural network via multiscale neighborhood interaction attention learning for underwater image enhancement. Specifically, the proposed network is trained by an ensemble of locally-aware and globally-aware branches processed in parallel, where the locally-aware branch with stronger representation ability aims to recover high-frequency local details sufficiently, and the globally-aware branch with weaker learning ability aims to prevent information loss in the low-frequency global structure effectively. On the other hand, we develop a plug-and-play multiscale neighborhood interaction attention module, which further enhances image quality through appropriate cross-channel interactions with inputs from different receptive fields. Compared with well-received methods, extensive experiments on both real-world and synthetic underwater images reveal that our proposed network can achieve superior color and contrast enhancement in terms of subjective visual perception and objective evaluation metrics. An ablation study is also conducted to demonstrate the effectiveness of each component in the network.
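
The hypothetical sketch below assembles a dual-branch block in this spirit: a locally-aware stack of 3x3 convolutions, a globally-aware downsample-upsample path, and an ECA-style channel attention computed at several neighborhood sizes; all channel counts, kernel sizes, and branch depths are assumptions for illustration.

```python
# Illustrative dual-branch enhancement block with multiscale cross-channel attention.
import torch
import torch.nn as nn

class MultiscaleNeighborhoodAttention(nn.Module):
    """ECA-style channel attention computed at several neighborhood sizes (assumed)."""
    def __init__(self, kernel_sizes=(3, 5, 7)):
        super().__init__()
        self.convs = nn.ModuleList(
            nn.Conv1d(1, 1, k, padding=k // 2, bias=False) for k in kernel_sizes)

    def forward(self, x):                          # x: (B, C, H, W)
        y = x.mean(dim=(2, 3)).unsqueeze(1)        # (B, 1, C) global channel descriptor
        att = sum(conv(y) for conv in self.convs) / len(self.convs)
        att = torch.sigmoid(att).transpose(1, 2).unsqueeze(-1)   # (B, C, 1, 1)
        return x * att

class DualBranchEnhancer(nn.Module):
    def __init__(self, ch=32):
        super().__init__()
        self.stem = nn.Conv2d(3, ch, 3, padding=1)
        # Locally-aware branch: stacked 3x3 convs for high-frequency detail.
        self.local = nn.Sequential(
            nn.Conv2d(ch, ch, 3, padding=1), nn.ReLU(),
            nn.Conv2d(ch, ch, 3, padding=1), nn.ReLU())
        # Globally-aware branch: downsample, process, upsample for low-frequency structure.
        self.globalb = nn.Sequential(
            nn.AvgPool2d(4),
            nn.Conv2d(ch, ch, 3, padding=1), nn.ReLU(),
            nn.Upsample(scale_factor=4, mode='bilinear', align_corners=False))
        self.att = MultiscaleNeighborhoodAttention()
        self.out = nn.Conv2d(ch, 3, 3, padding=1)

    def forward(self, x):
        f = self.stem(x)
        fused = self.att(self.local(f) + self.globalb(f))
        return torch.clamp(x + self.out(fused), 0.0, 1.0)    # residual enhancement

img = torch.rand(1, 3, 128, 128)
enhanced = DualBranchEnhancer()(img)
print(enhanced.shape)                              # torch.Size([1, 3, 128, 128])
```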

Citations: 0
Adaptive graph reasoning network for object detection
IF 4.2 CAS Tier 3 (Computer Science) Q2 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE Pub Date: 2024-09-04 DOI: 10.1016/j.imavis.2024.105248
Xinfang Zhong, Wenlan Kuang, Zhixin Li

In recent years, Transformer-based object detection has improved by leaps and bounds in performance. Nevertheless, these methods still face some problems, such as difficulty in detecting heavily occluded objects and tiny objects. Besides, the mainstream object detection paradigms usually deal with region proposals alone, without considering contextual information and the relationships between objects, which results in limited improvement. In this paper, we propose an Adaptive Graph Reasoning Network (AGRN) that explores the relationships between specific objects in an image and mines high-level semantic information via GCN to enrich visual features. Firstly, to enhance the semantic correlation between objects, a cross-scale semantic-aware module is proposed to realize the semantic interaction between feature maps of different scales so as to obtain a cross-scale semantic feature. Secondly, we activate the instance features in the image and combine the cross-scale semantic feature to create a dynamic graph. Finally, guided by the specific semantics, the attention mechanism is introduced to focus on the corresponding critical regions. On the MS-COCO 2017 dataset, our method improves performance by 3.9% box AP and 3.6% mask AP in object detection and instance segmentation, respectively, relative to the baseline. Additionally, our model has demonstrated exceptional performance on the PASCAL VOC dataset.
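
A bare-bones version of the graph-reasoning step might look like the sketch below: a dynamic adjacency built from feature similarity over region proposals, followed by one step of message passing. Dimensions and the similarity-based adjacency are illustrative assumptions; the paper additionally injects a cross-scale semantic feature and an attention mechanism.

```python
# Sketch: dynamic graph reasoning over region-proposal features.
import torch
import torch.nn as nn
import torch.nn.functional as F

class DynamicGraphReasoning(nn.Module):
    def __init__(self, dim=256):
        super().__init__()
        self.proj = nn.Linear(dim, dim)

    def forward(self, region_feats):               # (B, N, D) proposal features
        q = F.normalize(region_feats, dim=-1)
        adj = F.relu(torch.bmm(q, q.transpose(1, 2)))                 # dynamic adjacency (B, N, N)
        adj = adj / adj.sum(dim=-1, keepdim=True).clamp(min=1e-6)     # row-normalize
        propagated = torch.bmm(adj, self.proj(region_feats))          # one step of message passing
        return region_feats + F.relu(propagated)                      # residual update

feats = torch.randn(2, 100, 256)                   # 100 proposals per image
enhanced = DynamicGraphReasoning()(feats)
print(enhanced.shape)                              # torch.Size([2, 100, 256])
```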

Citations: 0
Efficient masked feature and group attention network for stereo image super-resolution
IF 4.2 CAS Tier 3 (Computer Science) Q2 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE Pub Date: 2024-09-04 DOI: 10.1016/j.imavis.2024.105252
Jianwen Song, Arcot Sowmya, Jien Kato, Changming Sun

Current stereo image super-resolution methods do not fully exploit cross-view and intra-view information, resulting in limited performance. While vision transformers have shown great potential in super-resolution, their application in stereo image super-resolution is hindered by high computational demands and insufficient channel interaction. This paper introduces an efficient masked feature and group attention network for stereo image super-resolution (EMGSSR) designed to integrate the strengths of transformers into stereo super-resolution while addressing their inherent limitations. Specifically, an efficient masked feature block is proposed to extract local features from critical areas within images, guided by sparse masks. A group-weighted cross-attention module consisting of group-weighted cross-view feature interactions along epipolar lines is proposed to fully extract cross-view information from stereo images. Additionally, a group-weighted self-attention module consisting of group-weighted self-attention feature extractions with different local windows is proposed to effectively extract intra-view information from stereo images. Experimental results demonstrate that the proposed EMGSSR outperforms state-of-the-art methods at relatively low computational costs. The proposed EMGSSR offers a robust solution that effectively extracts cross-view and intra-view information for stereo image super-resolution, bringing a promising direction for future research in high-fidelity stereo image super-resolution. Source codes will be released at https://github.com/jianwensong/EMGSSR.
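
For a rectified stereo pair, cross-view interaction along epipolar lines can be sketched as each left-view pixel attending to all pixels in the same row of the right-view feature map, as in the hypothetical snippet below; the group weighting, sparse masks, and local windows of the paper are omitted, and all shapes are assumptions.

```python
# Sketch: cross-view attention along epipolar lines for a rectified stereo pair.
import torch

def epipolar_cross_attention(feat_l, feat_r):
    # feat_l, feat_r: (B, C, H, W) features from the left and right views
    B, C, H, W = feat_l.shape
    q = feat_l.permute(0, 2, 3, 1).reshape(B * H, W, C)    # queries: left-view rows
    k = feat_r.permute(0, 2, 3, 1).reshape(B * H, W, C)    # keys/values: right-view rows
    attn = torch.softmax(q @ k.transpose(1, 2) / C ** 0.5, dim=-1)   # (B*H, W, W)
    out = attn @ k                                          # right-view info aggregated per left pixel
    return out.reshape(B, H, W, C).permute(0, 3, 1, 2)

left = torch.randn(1, 64, 32, 96)
right = torch.randn(1, 64, 32, 96)
fused = epipolar_cross_attention(left, right)
print(fused.shape)                                 # torch.Size([1, 64, 32, 96])
```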

Citations: 0
A lightweight hash-directed global perception and self-calibrated multiscale fusion network for image super-resolution
IF 4.2 CAS Tier 3 (Computer Science) Q2 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE Pub Date: 2024-09-04 DOI: 10.1016/j.imavis.2024.105255
Zhisheng Cui, Yibing Yao, Shilong Li, Yongcan Zhao, Ming Xin

In recent years, with the increase in the depth and width of convolutional neural networks, single image super-resolution (SISR) algorithms have made significant breakthroughs in objective quantitative metrics and subjective visual quality. However, these operations have inevitably caused model inference time to surge. In order to find a balance between model speed and accuracy, we propose a lightweight hash-directed global perception and self-calibrated multiscale fusion network for image Super-Resolution (HSNet) in this paper. The HSNet makes the following two main improvements: first, the Hash-Directed Global Perception module (HDGP) designed in this paper is able to capture the dependencies between features in a global perspective by using the hash encoding to direct the attention mechanism. Second, the Self-Calibrated Multiscale Fusion module (SCMF) proposed in this paper has two independent task branches: the upper branch of the SCMF utilizes the feature fusion module to capture multiscale contextual information, while the lower branch focuses on local details through a small convolutional kernel. These two branches are fused with each other to effectively enhance the network's multiscale understanding capability. Extensive experimental results demonstrate the remarkable superiority of our approach over other state-of-the-art methods in both subjective visual effects and objective evaluation metrics, including PSNR, SSIM, and computational complexity.
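
The sketch below illustrates one plausible reading of the self-calibrated two-branch fusion: a dilated-convolution branch gathers multiscale context, a small-kernel branch keeps local detail, and the context gates the local branch. Kernel and dilation choices are assumptions, and the hash-directed attention module is not shown.

```python
# Sketch: two-branch self-calibrated multiscale fusion block (all sizes assumed).
import torch
import torch.nn as nn

class SelfCalibratedMultiscaleFusion(nn.Module):
    def __init__(self, ch=64):
        super().__init__()
        # Upper branch: multiscale context via dilated convolutions.
        self.context = nn.ModuleList(
            nn.Conv2d(ch, ch, 3, padding=d, dilation=d) for d in (1, 2, 4))
        # Lower branch: local detail via a small convolutional kernel.
        self.local = nn.Conv2d(ch, ch, 3, padding=1)
        self.fuse = nn.Conv2d(ch, ch, 1)

    def forward(self, x):
        context = sum(conv(x) for conv in self.context) / len(self.context)
        gate = torch.sigmoid(self.fuse(context))    # context calibrates the local branch
        return x + gate * self.local(x)

x = torch.randn(1, 64, 48, 48)
y = SelfCalibratedMultiscaleFusion()(x)
print(y.shape)                                     # torch.Size([1, 64, 48, 48])
```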

Citations: 0
3D face alignment through fusion of head pose information and features
IF 4.2 CAS Tier 3 (Computer Science) Q2 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE Pub Date: 2024-09-04 DOI: 10.1016/j.imavis.2024.105253
Jaehyun So, Youngjoon Han

The ability of humans to infer head poses from face shapes, and vice versa, indicates a strong correlation between them. Recent studies on face alignment used head pose information to predict facial landmarks in computer vision tasks. However, many studies have been limited to using head pose information primarily to initialize mean landmarks, as it cannot represent detailed face shapes. To enhance face alignment performance through effective utilization, we introduce a novel approach that integrates head pose information into the feature maps of a face alignment network, rather than simply using it to initialize facial landmarks. Furthermore, the proposed network structure achieves reliable face alignment through a dual-dimensional network. This structure uses multidimensional features such as 2D feature maps and a 3D heatmap to reduce reliance on a single type of feature map and enrich the feature information. We also propose a dense face alignment method through an appended fully connected layer at the end of a dual-dimensional network, trained with sparse face alignment. This method easily trains dense face alignment by directly using predicted keypoints as knowledge and indirectly using semantic information. We experimentally assessed the correlation between the predicted facial landmarks and head pose information, as well as variations in the accuracy of facial landmarks with respect to the quality of head pose information. In addition, we demonstrated the effectiveness of the proposed method through a competitive performance comparison with state-of-the-art methods on the AFLW2000-3D, AFLW, and BIWI datasets. In the evaluation of the face alignment task, we achieved an NME of 3.21 for the AFLW2000-3D and 3.68 for the AFLW dataset.
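
One simple way to inject head-pose information into feature maps, in the spirit of the abstract, is a FiLM-style channel modulation conditioned on the pose vector, as sketched below; the 3-vector yaw/pitch/roll input and the MLP sizes are assumptions, and the paper's dual-dimensional (2D map plus 3D heatmap) structure is not reproduced.

```python
# Sketch: pose-conditioned channel modulation of a face-alignment feature map.
import torch
import torch.nn as nn

class PoseFusion(nn.Module):
    def __init__(self, ch=128):
        super().__init__()
        self.mlp = nn.Sequential(nn.Linear(3, 64), nn.ReLU(), nn.Linear(64, 2 * ch))

    def forward(self, feat, pose):                 # feat: (B, C, H, W), pose: (B, 3)
        scale, shift = self.mlp(pose).chunk(2, dim=1)
        scale = scale.unsqueeze(-1).unsqueeze(-1)  # (B, C, 1, 1)
        shift = shift.unsqueeze(-1).unsqueeze(-1)
        return feat * (1 + scale) + shift          # pose-conditioned feature map

feat = torch.randn(2, 128, 64, 64)
pose = torch.tensor([[0.1, -0.3, 0.0], [0.5, 0.2, -0.1]])   # yaw, pitch, roll (radians, assumed)
out = PoseFusion()(feat, pose)
print(out.shape)                                   # torch.Size([2, 128, 64, 64])
```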

Citations: 0
Distilling OCT cervical dataset with evidential uncertainty proxy
IF 4.2 CAS Tier 3 (Computer Science) Q2 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE Pub Date: 2024-09-03 DOI: 10.1016/j.imavis.2024.105250
Yuxuan Xiong, Yongchao Xu, Yan Zhang, Bo Du

Deep learning-based OCT image classification is of paramount importance for the early screening of cervical cancer. For the sake of efficiency and privacy, the emerging data distillation technique becomes a promising way to condense the large-scale original OCT dataset into a much smaller synthetic dataset, without losing much information for network training. However, OCT cervical images often suffer from redundancy, mis-operation, noise, etc. These challenges make it hard to compress as much valuable information as possible into an extremely small synthesized dataset. To this end, we design an uncertainty-aware distribution matching based dataset distillation framework (UDM). Precisely, we adopt a pre-trained plug-and-play uncertainty estimation proxy to compute classification uncertainty for each data point in the original and synthetic datasets. The estimated uncertainty allows us to adaptively calculate class-wise feature centers of the original and synthetic data, thereby increasing the importance of typical patterns and reducing the impact of redundancy, mis-operation, and noise, etc. Extensive experiments show that our UDM effectively improves distribution-matching-based dataset distillation under both homogeneous and heterogeneous training scenarios.
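
A toy version of uncertainty-weighted distribution matching is sketched below: class-wise feature centers of real and synthetic data are computed with per-sample weights derived from a proxy's uncertainty estimates and then matched with an MSE loss; the inverse-uncertainty weighting and tensor sizes are illustrative assumptions.

```python
# Sketch: uncertainty-weighted class centers for distribution-matching distillation.
import torch
import torch.nn.functional as F

def weighted_class_centers(feats, labels, uncertainty, num_classes):
    # feats: (N, D), labels: (N,), uncertainty: (N,) in [0, 1] from the proxy model
    weights = 1.0 - uncertainty                    # trust confident samples more (assumed weighting)
    centers = []
    for c in range(num_classes):
        m = labels == c
        w = weights[m].unsqueeze(1)
        centers.append((w * feats[m]).sum(0) / w.sum().clamp(min=1e-6))
    return torch.stack(centers)                    # (num_classes, D)

real_f, real_y = torch.randn(512, 256), torch.randint(0, 5, (512,))
syn_f, syn_y = torch.randn(50, 256, requires_grad=True), torch.randint(0, 5, (50,))
real_u, syn_u = torch.rand(512), torch.rand(50)    # stand-ins for proxy uncertainty scores

loss = F.mse_loss(weighted_class_centers(syn_f, syn_y, syn_u, 5),
                  weighted_class_centers(real_f, real_y, real_u, 5))
loss.backward()                                    # gradients flow into the synthetic set
```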

Citations: 0
Exploring the synergy between textual identity and visual signals in human-object interaction
IF 4.2 CAS Tier 3 (Computer Science) Q2 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE Pub Date: 2024-09-02 DOI: 10.1016/j.imavis.2024.105249
Pinzhu An, Zhi Tan

The Human-Object Interaction (HOI) detection task aims to recognize and understand interactions between humans and objects depicted in images. Unlike instance recognition tasks, which focus on isolated objects, HOI detection requires considering various explanatory factors, such as instance identity, spatial relationships, and scene context. However, previous HOI detection methods have primarily relied on local visual cues, often overlooking the vital role of instance identity and thus limiting the performance of models. In this paper, we introduce textual features to expand the definition of HOI representations, incorporating instance identity into the HOI reasoning process. Drawing inspiration from the human activity perception process, we explore the synergy between textual identity and visual signals to leverage various explanatory factors more effectively and enhance HOI detection performance. Specifically, our method extracts HOI explanatory factors using both modal representations. Visual features capture interactive cues, while textual features explicitly denote instance identities within human-object pairs, delineating relevant interaction categories. Additionally, we utilize Contrastive Language-Image Pre-training (CLIP) to enhance the semantic alignment between visual and textual features and design a cross-modal learning module for integrating HOI multimodal information. Extensive experiments on several benchmarks demonstrate that our proposed framework surpasses most existing methods, achieving outstanding performance with a mean average precision (mAP) of 33.95 on the HICO-DET dataset and 63.2 mAP on the V-COCO dataset.
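
The snippet below sketches a possible cross-modal fusion step: a text embedding of each human-object identity pair (e.g. produced offline by a frozen CLIP text encoder) is fused with the visual pair feature via cross-attention before interaction classification; the 512-dimensional text size, single attention head, and 117-way verb head are assumptions for illustration.

```python
# Sketch: fusing textual identity embeddings with visual human-object pair features.
import torch
import torch.nn as nn

class IdentityVisualFusion(nn.Module):
    def __init__(self, vis_dim=256, txt_dim=512):
        super().__init__()
        self.txt_proj = nn.Linear(txt_dim, vis_dim)
        self.attn = nn.MultiheadAttention(vis_dim, num_heads=1, batch_first=True)
        self.classifier = nn.Linear(vis_dim, 117)   # e.g. HICO-DET verb classes (assumed head)

    def forward(self, vis_pair_feats, txt_identity_feats):
        # vis_pair_feats: (B, P, 256) features of human-object pairs
        # txt_identity_feats: (B, P, 512) text embeddings such as "a person riding a horse"
        txt = self.txt_proj(txt_identity_feats)
        fused, _ = self.attn(query=vis_pair_feats, key=txt, value=txt)
        return self.classifier(vis_pair_feats + fused)   # interaction logits per pair

vis = torch.randn(2, 16, 256)
txt = torch.randn(2, 16, 512)                      # stand-in for precomputed CLIP text features
logits = IdentityVisualFusion()(vis, txt)
print(logits.shape)                                # torch.Size([2, 16, 117])
```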

Citations: 0
Privacy-SF: An encoding-based privacy-preserving segmentation framework for medical images
IF 4.2 CAS Tier 3 (Computer Science) Q2 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE Pub Date: 2024-09-02 DOI: 10.1016/j.imavis.2024.105246
Long Chen, Li Song, Haiyu Feng, Rediet Tesfaye Zeru, Senchun Chai, Enjun Zhu

Deep learning is becoming increasingly popular and is being extensively used in the field of medical image analysis. However, the privacy sensitivity of medical data limits the availability of data, which constrains the advancement of medical image analysis and impedes collaboration across multiple centers. To address this problem, we propose a novel encoding-based framework, named Privacy-SF, aimed at implementing privacy-preserving segmentation for medical images. Our proposed segmentation framework consists of three CNN networks: 1) two encoding networks on the client side that encode medical images and their corresponding segmentation masks individually to remove the privacy features, 2) a unique mapping network that analyzes the content of encoded data and learns the mapping from the encoded image to the encoded mask. By sequentially encoding data and optimizing the mapping network, our approach ensures privacy protection for images and masks during both the training and inference phases of medical image analysis. Additionally, to further improve the segmentation performance, we carefully design augmentation strategies specifically for encoded data based on its sequence nature. Extensive experiments conducted on five datasets with different modalities demonstrate excellent performance in privacy-preserving segmentation and multi-center collaboration. Furthermore, the analysis of encoded data and the experiment of model inversion attacks validate the privacy-preserving capability of our approach.
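
The toy layout below mirrors the three-network description: two client-side encoders transform the image and the mask into an encoded space, and a server-side mapping network is trained only on encoded tensors; the tiny convolutional architectures and the MSE objective are placeholder assumptions.

```python
# Sketch: client-side encoding networks plus a server-side mapping network.
import torch
import torch.nn as nn

def small_cnn(in_ch, out_ch):
    return nn.Sequential(
        nn.Conv2d(in_ch, 32, 3, padding=1), nn.ReLU(),
        nn.Conv2d(32, out_ch, 3, padding=1))

image_encoder = small_cnn(1, 8)      # client side: encodes the medical image
mask_encoder = small_cnn(1, 8)       # client side: encodes the segmentation mask
mapping_net = small_cnn(8, 8)        # server side: sees only encoded tensors

image = torch.rand(4, 1, 128, 128)
mask = (torch.rand(4, 1, 128, 128) > 0.5).float()

with torch.no_grad():                # encoding happens on the client; raw data never leaves it
    enc_img = image_encoder(image)
    enc_mask = mask_encoder(mask)

pred = mapping_net(enc_img)          # the server learns encoded-image -> encoded-mask
loss = nn.functional.mse_loss(pred, enc_mask)
loss.backward()
print(float(loss))
```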

Citations: 0