
Latest publications in Signal Processing-Image Communication

Contrastive learning for deep tone mapping operator
IF 3.5 | CAS Tier 3 (Engineering & Technology) | Q2 ENGINEERING, ELECTRICAL & ELECTRONIC | Pub Date: 2024-04-29 | DOI: 10.1016/j.image.2024.117130
Di Li, Mou Wang, Susanto Rahardja

Most existing tone mapping operators (TMOs) are developed based on prior assumptions about the human visual system, and they are known to be sensitive to hyperparameters. In this paper, we propose a straightforward yet efficient framework to automatically learn the priors and perform tone mapping in an end-to-end manner. The proposed algorithm utilizes a contrastive learning framework to enforce content consistency between high dynamic range (HDR) inputs and low dynamic range (LDR) outputs. Since contrastive learning aims at maximizing the mutual information across different domains, no paired images or labels are required by our algorithm. Equipped with an attention-based U-Net to alleviate aliasing and halo artifacts, our algorithm produces sharp and visually appealing images over various complex real-world scenes, indicating that it can serve as a strong baseline for future HDR image tone mapping tasks. Extensive experiments as well as subjective evaluations demonstrate that the proposed algorithm outperforms existing state-of-the-art algorithms both qualitatively and quantitatively. The code is available at https://github.com/xslidi/CATMO.
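
As a rough illustration of the contrastive objective described in the abstract, the sketch below ties HDR inputs to their tone-mapped LDR outputs with an InfoNCE-style loss; the embedding dimensionality, temperature, and symmetric formulation are assumptions for illustration, not the authors' exact loss.

```python
import torch
import torch.nn.functional as F

def info_nce(hdr_feats: torch.Tensor, ldr_feats: torch.Tensor, temperature: float = 0.07) -> torch.Tensor:
    """hdr_feats, ldr_feats: (B, D) embeddings of paired HDR inputs and LDR outputs."""
    hdr = F.normalize(hdr_feats, dim=1)
    ldr = F.normalize(ldr_feats, dim=1)
    logits = hdr @ ldr.t() / temperature                     # (B, B) cross-domain similarities
    targets = torch.arange(hdr.size(0), device=hdr.device)
    # Matching HDR/LDR pairs lie on the diagonal; every other pair acts as a negative,
    # so minimizing this loss maximizes a lower bound on cross-domain mutual information.
    return 0.5 * (F.cross_entropy(logits, targets) + F.cross_entropy(logits.t(), targets))

if __name__ == "__main__":
    b, d = 8, 128
    print(info_nce(torch.randn(b, d), torch.randn(b, d)).item())
```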

Citations: 0
U-ATSS: A lightweight and accurate one-stage underwater object detection network
IF 3.5 | CAS Tier 3 (Engineering & Technology) | Q2 ENGINEERING, ELECTRICAL & ELECTRONIC | Pub Date: 2024-04-28 | DOI: 10.1016/j.image.2024.117137
Junjun Wu, Jinpeng Chen, Qinghua Lu, Jiaxi Li, Ningwei Qin, Kaixuan Chen, Xilin Liu

Due to the harsh and unknown marine environment and the limited diving ability of human beings, underwater robots play an important role in ocean exploration and development. However, the performance of underwater robots is limited by blurred images, low contrast and color deviation, which result from complex underwater imaging environments. Existing mainstream object detection networks perform poorly when applied directly to underwater tasks. Although a cascaded detector network can achieve high accuracy, its inference speed is too slow for practical tasks. To address these problems, this paper proposes a lightweight and accurate one-stage underwater object detection network, called U-ATSS. First, we compress the backbone of ATSS to significantly reduce the number of network parameters and improve inference speed without losing detection accuracy, achieving a lightweight, real-time underwater object detection network. Then, we propose a plug-and-play receptive field module, F-ASPP, which obtains larger receptive fields and richer spatial information, and we optimize the learning rate strategy as well as the classification loss function to significantly improve detection accuracy and convergence speed. We evaluated and compared U-ATSS with other methods on the Kesci Underwater Object Detection Algorithm Competition dataset, which contains a variety of marine organisms. The experimental results show that U-ATSS not only has clear lightweight characteristics but also delivers excellent performance and competitiveness in terms of detection accuracy.
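
The abstract does not spell out the internals of F-ASPP, so the following is only a generic ASPP-style receptive-field block in PyTorch, sketched under the assumption of parallel dilated 3x3 branches fused by a 1x1 projection; the branch rates and channel counts are illustrative, not the authors' configuration.

```python
import torch
import torch.nn as nn

class ASPPBlock(nn.Module):
    """Plug-and-play multi-rate dilated block that enlarges the receptive field."""
    def __init__(self, in_ch: int, out_ch: int, rates=(1, 6, 12, 18)):
        super().__init__()
        self.branches = nn.ModuleList([
            nn.Sequential(
                nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=r, dilation=r, bias=False),
                nn.BatchNorm2d(out_ch),
                nn.ReLU(inplace=True),
            )
            for r in rates
        ])
        self.project = nn.Conv2d(out_ch * len(rates), out_ch, kernel_size=1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Parallel dilated convolutions gather multi-scale context; a 1x1 convolution
        # fuses the concatenated branches back to out_ch channels.
        return self.project(torch.cat([b(x) for b in self.branches], dim=1))

if __name__ == "__main__":
    y = ASPPBlock(64, 64)(torch.randn(1, 64, 80, 80))
    print(y.shape)  # torch.Size([1, 64, 80, 80])
```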

Citations: 0
Image splicing detection using low-dimensional feature vector of texture features and Haralick features based on Gray Level Co-occurrence Matrix
IF 3.5 | CAS Tier 3 (Engineering & Technology) | Q2 ENGINEERING, ELECTRICAL & ELECTRONIC | Pub Date: 2024-04-27 | DOI: 10.1016/j.image.2024.117134
Debjit Das, Ruchira Naskar

Digital image forgery has become hugely widespread, as numerous easy-to-use, low-cost image manipulation tools are widely available to the general public. Such forged images can be used with various malicious intentions, such as harming the social reputation of renowned personalities, committing identity fraud that leads to financial disasters, and many other illegitimate activities. Image splicing is a form of image forgery where an adversary intelligently combines portions from multiple source images to generate a natural-looking artificial image. Detection of image splicing attacks poses an open challenge in the forensic domain, and recent literature describes several significant research findings on image splicing detection. However, the feature sets documented in such works are typically very large. Our aim in this work is to address feature set optimization while modeling image splicing detection as a classification problem and preserving the forgery detection efficiency reported in the state-of-the-art. This paper proposes an image splicing detection scheme based on textural features and Haralick features computed from the input image's Gray Level Co-occurrence Matrix (GLCM), and it also localizes the spliced regions in a detected spliced image. We have explored the well-known Columbia Image Splicing Detection Evaluation Dataset and the DSO-1 dataset, which is more challenging because it consists of post-processed color images. Experimental results show that our proposed model obtains 95% accuracy in image splicing detection with an AUC score of 0.99, using an optimized feature set with a dimensionality of only 15.
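
To make the GLCM-based pipeline concrete, here is a minimal sketch using scikit-image's graycomatrix/graycoprops (names as of skimage 0.19 and later); it extracts a handful of Haralick-style statistics per image as classifier input, not the authors' exact 15-dimensional feature set.

```python
import numpy as np
from skimage.feature import graycomatrix, graycoprops

def glcm_features(gray: np.ndarray) -> np.ndarray:
    """gray: 2-D uint8 image. Returns a small vector of Haralick-style GLCM statistics."""
    glcm = graycomatrix(gray,
                        distances=[1],
                        angles=[0, np.pi / 4, np.pi / 2, 3 * np.pi / 4],
                        levels=256, symmetric=True, normed=True)
    props = ["contrast", "dissimilarity", "homogeneity", "energy", "correlation", "ASM"]
    # Average each property over the four angles for a rotation-robust descriptor.
    return np.array([graycoprops(glcm, p).mean() for p in props])

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    img = rng.integers(0, 256, size=(128, 128), dtype=np.uint8)
    print(glcm_features(img))  # feed vectors like this to any standard classifier
```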

Citations: 0
A flow-based multi-scale learning network for single image stochastic super-resolution
IF 3.5 | CAS Tier 3 (Engineering & Technology) | Q2 ENGINEERING, ELECTRICAL & ELECTRONIC | Pub Date: 2024-04-24 | DOI: 10.1016/j.image.2024.117132
Qianyu Wu, Zhongqian Hu, Aichun Zhu, Hui Tang, Jiaxin Zou, Yan Xi, Yang Chen

Single image super-resolution (SISR) remains an important yet challenging task. Existing methods usually ignore the diversity of generated super-resolution (SR) images. The fine details of the corresponding high-resolution (HR) images cannot be confidently recovered due to the degradation of detail in low-resolution (LR) images. To address this issue, this paper presents a flow-based multi-scale learning network (FMLnet) to explore the diverse mapping spaces for SR. First, we propose a multi-scale learning block (MLB) to extract the underlying features of the LR image. Second, the introduced pixel-wise multi-head attention allows our model to map multiple representation subspaces simultaneously. Third, by employing a normalizing flow module for a given LR input, our approach generates diverse stochastic SR outputs with high visual quality, and the trade-off between fidelity and perceptual quality can be controlled. Finally, experimental results on five datasets demonstrate that the proposed network outperforms existing methods in terms of diversity and achieves competitive PSNR/SSIM results. Code is available at https://github.com/qianyuwu/FMLnet.
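
The stochasticity comes from the invertibility of the flow: at test time a latent z is sampled and mapped back to image space conditioned on the LR features, so different latents give different plausible reconstructions. The toy conditional affine coupling layer below illustrates that mechanism only; it is not the FMLnet architecture, and all layer sizes are assumptions.

```python
import torch
import torch.nn as nn

class ConditionalCoupling(nn.Module):
    """One invertible affine coupling step conditioned on LR features."""
    def __init__(self, dim: int, cond_dim: int, hidden: int = 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(dim // 2 + cond_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, dim),   # predicts log-scale and shift for the second half
        )

    def inverse(self, z: torch.Tensor, cond: torch.Tensor) -> torch.Tensor:
        # z -> x direction used at sampling time: keep z1, transform z2 given (z1, cond).
        z1, z2 = z.chunk(2, dim=1)
        log_s, t = self.net(torch.cat([z1, cond], dim=1)).chunk(2, dim=1)
        x2 = (z2 - t) * torch.exp(-log_s)
        return torch.cat([z1, x2], dim=1)

if __name__ == "__main__":
    layer = ConditionalCoupling(dim=8, cond_dim=16)
    lr_feat = torch.randn(1, 16)              # stand-in for conditioning LR features
    for _ in range(3):                        # three stochastic "SR samples"
        print(layer.inverse(torch.randn(1, 8), lr_feat).squeeze())
```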

Citations: 0
S2CANet: A self-supervised infrared and visible image fusion based on co-attention network
IF 3.5 | CAS Tier 3 (Engineering & Technology) | Q2 ENGINEERING, ELECTRICAL & ELECTRONIC | Pub Date: 2024-04-21 | DOI: 10.1016/j.image.2024.117131
Dongyang Li, Rencan Nie, Jinde Cao, Gucheng Zhang, Biaojian Jin

Existing methods for infrared and visible image fusion (IVIF) often overlook the analysis of common and distinct features among source images. Consequently, this study develops a self-supervised infrared and visible image fusion method based on a co-attention network, incorporating auxiliary networks and backbone networks in its design. The primary concept is to transform both common and distinct features into common features and reconstructed features, subsequently deriving the distinct features through their subtraction. To enhance the similarity of common features, we designed a fusion block based on co-attention (FBC) module specifically for this purpose, capturing common features through co-attention. Moreover, fine-tuning the auxiliary network enhances the image reconstruction effectiveness of the backbone network. Notably, the auxiliary network is employed only during training, to guide the self-supervised completion of IVIF by the backbone network. Additionally, we introduce a novel estimate for the weighted fidelity loss to guide the fused image in preserving more brightness from the source image. Experiments conducted on diverse benchmark datasets demonstrate the superior performance of our S2CANet over state-of-the-art IVIF methods.
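
One possible reading of the common/distinct decomposition is sketched below, assuming a plain dot-product co-attention over flattened feature maps and subtraction for the modality-specific parts; the actual FBC module and feature shapes in S2CANet are not reproduced here.

```python
import torch

def co_attention_split(f_ir: torch.Tensor, f_vis: torch.Tensor):
    """f_ir, f_vis: (B, C, H, W) feature maps from the infrared and visible branches."""
    b, c, h, w = f_ir.shape
    q = f_ir.flatten(2)                                              # (B, C, HW)
    k = f_vis.flatten(2)
    attn = torch.softmax(q.transpose(1, 2) @ k / c ** 0.5, dim=-1)   # (B, HW, HW)
    # Each IR location aggregates the visible content it attends to: the "common" part.
    common = (k @ attn.transpose(1, 2)).view(b, c, h, w)
    distinct_ir = f_ir - common                                      # modality-specific residuals
    distinct_vis = f_vis - common
    return common, distinct_ir, distinct_vis

if __name__ == "__main__":
    c, d_ir, d_vis = co_attention_split(torch.randn(2, 16, 8, 8), torch.randn(2, 16, 8, 8))
    print(c.shape, d_ir.shape, d_vis.shape)
```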

Citations: 0
CEDR: Contrastive Embedding Distribution Refinement for 3D point cloud representation
IF 3.5 | CAS Tier 3 (Engineering & Technology) | Q2 ENGINEERING, ELECTRICAL & ELECTRONIC | Pub Date: 2024-04-12 | DOI: 10.1016/j.image.2024.117129
Feng Yang, Yichao Cao, Qifan Xue, Shuai Jin, Xuanpeng Li, Weigong Zhang

Distinguishable deep features are essential for 3D point cloud recognition, as they influence the search for the optimal classifier. Most existing point cloud classification methods focus mainly on local information aggregation while ignoring the feature distribution of the whole dataset, which encodes more informative and intrinsic semantic relationships among labeled data and, if better exploited, could yield more discriminative inter-class features. Our work attempts to construct a more distinguishable feature space by performing feature distribution refinement inspired by contrastive learning and sample mining strategies, without modifying the model architecture. To explore the full potential of feature distribution refinement, two modules are introduced to boost the distinguishability of exceptionally distributed samples in an adaptive manner: (i) the Confusion-Prone Classes Mining (CPCM) module targets hard-to-distinguish classes and alleviates massive category-level confusion by generating class-level soft labels; (ii) the Entropy-Aware Attention (EAA) mechanism is proposed to remove the influence of trivial cases that could substantially weaken model performance. Our method achieves competitive results on multiple point cloud applications. In particular, it reaches 85.8% accuracy on ScanObjectNN, with substantial performance gains of up to 2.7% in DCGNN, 3.1% in PointNet++, and 2.4% in GBNet. Our code is available at https://github.com/YangFengSEU/CEDR.
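
As one possible reading of the entropy-aware idea, the sketch below down-weights samples whose predictive distribution is close to uniform before averaging the loss; the exact EAA formulation in CEDR is not given in the abstract, so this weighting scheme is an assumption.

```python
import math
import torch
import torch.nn.functional as F

def entropy_weighted_ce(logits: torch.Tensor, targets: torch.Tensor) -> torch.Tensor:
    """logits: (B, K) class scores; targets: (B,) integer labels."""
    probs = logits.softmax(dim=1)
    entropy = -(probs * probs.clamp_min(1e-8).log()).sum(dim=1)      # per-sample entropy
    weights = 1.0 - entropy / math.log(logits.size(1))               # in [0, 1]; uniform -> 0
    per_sample = F.cross_entropy(logits, targets, reduction="none")
    # Detach the weights so they modulate, but do not receive, gradients.
    return (weights.detach() * per_sample).mean()

if __name__ == "__main__":
    print(entropy_weighted_ce(torch.randn(4, 10), torch.randint(0, 10, (4,))).item())
```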

Citations: 0
Image clustering using generated text centroids
IF 3.5 | CAS Tier 3 (Engineering & Technology) | Q2 ENGINEERING, ELECTRICAL & ELECTRONIC | Pub Date: 2024-04-12 | DOI: 10.1016/j.image.2024.117128
Daehyeon Kong, Kyeongbo Kong, Suk-Ju Kang

In recent years, deep neural networks pretrained on large-scale datasets have been used to address data deficiency and achieve better performance through prior knowledge. Contrastive language-image pretraining (CLIP), a vision-language model pretrained on an extensive dataset, achieves better performance in image recognition. In this study, we harness the power of multimodality in image clustering tasks, shifting from a single modality to a multimodal framework by using the describability property of the CLIP model's image encoder. The importance of this shift lies in the ability of multimodality to provide richer feature representations. By generating text centroids corresponding to image features, we effectively create a common descriptive language for each cluster: the method generates text centroids assigned by the image features and improves clustering performance. The text centroids use the results of a standard clustering algorithm as pseudo-labels and learn a common description of each cluster. Finally, although only text centroids are added and the image features in the same space are assigned to them, the clustering performance improves significantly compared to the standard clustering algorithm, especially on complex datasets. When the proposed method is applied, the normalized mutual information score rises by 32% on the Stanford40 dataset and 64% on ImageNet-Dog compared to the k-means clustering algorithm.
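
The two-stage recipe (a standard clustering run for pseudo-labels, then one learnable centroid per cluster aligned with the image features assigned to it) can be sketched as follows; random vectors stand in for CLIP image embeddings, and the optimizer, temperature, and cosine objective are illustrative assumptions rather than the paper's code.

```python
import torch
import torch.nn.functional as F
from sklearn.cluster import KMeans

def learn_centroids(img_feats: torch.Tensor, k: int, steps: int = 200) -> torch.Tensor:
    """img_feats: (N, D) image embeddings. Returns (k, D) learned centroid vectors."""
    km = KMeans(n_clusters=k, n_init=10, random_state=0)
    labels = torch.as_tensor(km.fit_predict(img_feats.numpy()), dtype=torch.long)  # pseudo-labels
    centroids = torch.randn(k, img_feats.size(1), requires_grad=True)
    opt = torch.optim.Adam([centroids], lr=1e-2)
    for _ in range(steps):
        # Cosine similarity between each image feature and every centroid.
        sim = F.normalize(img_feats, dim=1) @ F.normalize(centroids, dim=1).t()
        loss = F.cross_entropy(sim / 0.07, labels)     # pull features toward their own centroid
        opt.zero_grad()
        loss.backward()
        opt.step()
    return centroids.detach()

if __name__ == "__main__":
    feats = torch.randn(256, 64)            # stand-in for CLIP image-encoder outputs
    print(learn_centroids(feats, k=5).shape)  # torch.Size([5, 64])
```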

Citations: 0
Explicit3D: Graph network with spatial inference for single image 3D object detection
IF 3.5 | CAS Tier 3 (Engineering & Technology) | Q2 ENGINEERING, ELECTRICAL & ELECTRONIC | Pub Date: 2024-03-30 | DOI: 10.1016/j.image.2024.117120
Yanjun Liu, Wenming Yang

Indoor 3D object detection is an essential task in single image scene understanding, and it fundamentally affects spatial cognition in visual reasoning. Existing works on 3D object detection from a single image either pursue this goal through independent predictions for each object or implicitly reason over all possible objects, failing to harness the relational geometric information between objects. To address this problem, we propose a sparse graph-based pipeline named Explicit3D based on object geometry and semantics features. Taking efficiency into consideration, we further define a relatedness score and design a novel dynamic pruning method via group sampling for sparse scene graph generation and updating. Furthermore, our Explicit3D introduces homogeneous matrices and defines new relative and corner losses to model the spatial difference between target pairs explicitly. Instead of using ground-truth labels as direct supervision, our relative and corner losses are derived from homogeneous transforms, which leads the model to learn the geometric consistency between objects. The experimental results on the SUN RGB-D dataset demonstrate that our Explicit3D achieves a better performance balance than the state-of-the-art.
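
As an illustration of relatedness-based pruning for a sparse scene graph, the sketch below scores object pairs with a plain dot product and keeps only the top fraction as edges; the actual relatedness score and group-sampling procedure in Explicit3D are not reproduced, and the keep ratio is an assumption.

```python
import torch

def build_sparse_edges(obj_feats: torch.Tensor, keep_ratio: float = 0.25) -> torch.Tensor:
    """obj_feats: (N, D) per-object embeddings. Returns (E, 2) indices of kept edges."""
    n = obj_feats.size(0)
    sim = obj_feats @ obj_feats.t()                 # simple dot-product relatedness
    idx = torch.triu_indices(n, n, offset=1)        # all unordered object pairs
    scores = sim[idx[0], idx[1]]
    k = max(1, int(keep_ratio * scores.numel()))
    top = scores.topk(k).indices                    # prune low-relatedness pairs
    return torch.stack([idx[0][top], idx[1][top]], dim=1)

if __name__ == "__main__":
    edges = build_sparse_edges(torch.randn(8, 32))
    print(edges.shape, edges[:3])
```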

Citations: 0
A knowledge distillation based deep learning framework for cropped images detection in spatial domain
IF 3.5 | CAS Tier 3 (Engineering & Technology) | Q2 ENGINEERING, ELECTRICAL & ELECTRONIC | Pub Date: 2024-03-20 | DOI: 10.1016/j.image.2024.117117
Israr Hussain, Shunquan Tan, Jiwu Huang

Cropping an image is a common image editing technique that aims to find viewpoints with suitable image composition. It is also a frequently used post-processing technique to reduce the evidence of tampering in an image. Detecting cropped images poses a significant challenge in the field of digital image forensics, as the distortions introduced by image cropping are often imperceptible to the human eye. Deep neural networks achieve state-of-the-art performance here, thanks to their ability to encode large-scale data and handle billions of model parameters. However, because of their high computational complexity and substantial storage requirements, it is difficult to deploy these large deep learning models on resource-constrained devices such as mobile phones and embedded systems. To address this issue, we propose a lightweight deep learning framework for cropping detection in the spatial domain, based on knowledge distillation. Initially, we constructed four datasets containing a total of 60,000 images cropped using various tools. We then used EfficientNet-B0, pre-trained on ImageNet with significant surgical adjustments, as the teacher model, which makes it more robust and faster to converge on this downstream task. The teacher was trained on 20,000 cropped and uncropped images from our own dataset, and we then transferred its knowledge to a more compact student model. Finally, we selected the best-performing lightweight model as the final prediction model, with a testing accuracy of 98.44% on the test dataset, outperforming other methods. Extensive experiments demonstrate that our proposed model, distilled from EfficientNet-B0, achieves state-of-the-art performance in terms of detection accuracy, training parameters, and FLOPs, outperforming existing methods in detecting cropped images.
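
Teacher-to-student transfer typically combines softened teacher targets with hard-label supervision. The sketch below shows a standard distillation loss of that kind; the temperature, loss weighting, and the binary cropped/uncropped setting are illustrative assumptions, not the paper's exact training recipe.

```python
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels, T: float = 4.0, alpha: float = 0.7):
    """Weighted sum of KL to softened teacher outputs and cross-entropy on hard labels."""
    soft = F.kl_div(F.log_softmax(student_logits / T, dim=1),
                    F.softmax(teacher_logits / T, dim=1),
                    reduction="batchmean") * (T * T)   # T^2 keeps gradient magnitudes comparable
    hard = F.cross_entropy(student_logits, labels)
    return alpha * soft + (1.0 - alpha) * hard

if __name__ == "__main__":
    s, t = torch.randn(8, 2), torch.randn(8, 2)        # binary task: cropped vs. uncropped
    y = torch.randint(0, 2, (8,))
    print(distillation_loss(s, t, y).item())
```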

Citations: 0
A distortion-free authentication method for color images with tampering localization and self-recovery
IF 3.5 | CAS Tier 3 (Engineering & Technology) | Q2 ENGINEERING, ELECTRICAL & ELECTRONIC | Pub Date: 2024-03-15 | DOI: 10.1016/j.image.2024.117116
Che-Wei Lee

For immediate access and online backup, cloud storage has become a mainstream way to store and distribute digital data. Assuring that images accessed or downloaded from clouds are reliable is critical to storage service providers. In this study, a new distortion-free color image authentication method with tampering recovery capability is proposed, based on secret sharing, data compression and image interpolation. The proposed method generates elaborate authentication signals that serve the double function of tampering localization and image repair. The authentication signals are subsequently converted into many shares using a (k, n)-threshold method, so as to increase the multiplicity of the authentication signals and reinforce the capability of tampering recovery. These shares are then randomly concealed in the alpha channel of the to-be-protected image, which has been transformed into the PNG format containing RGBA channels. During authentication, the authentication signals computed from the alpha channel are used not only to indicate whether an image block has been tampered with, but also as a signal to find the corresponding color in a predefined palette to recover the tampered image block. Compared with several state-of-the-art methods, the proposed method attains desirable properties including losslessness, tampering localization and tampering recovery. Experimental results, discussions on security considerations, and comparisons with other related methods are provided to demonstrate the superiority of the proposed method.
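
A (k, n)-threshold scheme of the kind mentioned above can be sketched with Shamir secret sharing over a small prime field, as below; the paper's actual share construction, compression, and alpha-channel embedding are not reproduced, and the field size and parameters are illustrative.

```python
import random

PRIME = 257  # one more than the 8-bit range, so any byte value can be shared

def make_shares(secret: int, k: int, n: int):
    """Split `secret` (0..255) into n shares; any k of them reconstruct it."""
    coeffs = [secret] + [random.randrange(PRIME) for _ in range(k - 1)]
    # Each share is a point (x, f(x)) on a random degree-(k-1) polynomial with f(0) = secret.
    return [(x, sum(c * pow(x, i, PRIME) for i, c in enumerate(coeffs)) % PRIME)
            for x in range(1, n + 1)]

def reconstruct(shares):
    """Lagrange interpolation at x = 0 over GF(PRIME)."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % PRIME
                den = den * (xi - xj) % PRIME
        secret = (secret + yi * num * pow(den, -1, PRIME)) % PRIME
    return secret

if __name__ == "__main__":
    shares = make_shares(173, k=3, n=5)
    print(reconstruct(shares[:3]))   # any 3 of the 5 shares recover 173
```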

Citations: 0