
Signal Processing-Image Communication — Latest Articles

S2CANet: A self-supervised infrared and visible image fusion based on co-attention network
IF 3.5 · CAS Tier 3 (Engineering & Technology) · Q2 ENGINEERING, ELECTRICAL & ELECTRONIC · Pub Date: 2024-04-21 · DOI: 10.1016/j.image.2024.117131
Dongyang Li, Rencan Nie, Jinde Cao, Gucheng Zhang, Biaojian Jin

Existing methods for infrared and visible image fusion (IVIF) often overlook the analysis of common and distinct features among source images. Consequently, this study develops S2CANet, a self-supervised infrared and visible image fusion method based on a co-attention network, incorporating auxiliary and backbone networks in its design. The primary concept is to transform both common and distinct features into common features and reconstructed features, subsequently deriving the distinct features through their subtraction. To enhance the similarity of common features, we designed a fusion block based on co-attention (FBC) module specifically for this purpose, capturing common features through co-attention. Moreover, fine-tuning the auxiliary network enhances the image reconstruction effectiveness of the backbone network. Notably, the auxiliary network is employed only during training, to guide the self-supervised completion of IVIF by the backbone network. Additionally, we introduce a novel estimate for the weighted fidelity loss to guide the fused image in preserving more brightness from the source images. Experiments conducted on diverse benchmark datasets demonstrate the superior performance of our S2CANet over state-of-the-art IVIF methods.
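
As a concrete illustration of the common/distinct decomposition described above, the sketch below pairs a generic dot-product co-attention block with the subtraction step. This is a minimal sketch and not the authors' implementation: the module layout, channel sizes, and the stand-in for the backbone's reconstruction are all assumptions made for illustration.

```python
# Minimal sketch (not the authors' code): a generic co-attention block produces
# "common" features from the two sources, and distinct features are recovered by
# subtraction, as the abstract describes. Sizes are illustrative assumptions.
import torch
import torch.nn as nn

class CoAttention(nn.Module):
    def __init__(self, channels):
        super().__init__()
        self.query = nn.Conv2d(channels, channels // 2, 1)
        self.key = nn.Conv2d(channels, channels // 2, 1)
        self.value = nn.Conv2d(channels, channels, 1)

    def forward(self, f_ir, f_vis):
        b, c, h, w = f_ir.shape
        q = self.query(f_ir).flatten(2)                        # (B, C/2, HW)
        k = self.key(f_vis).flatten(2)                         # (B, C/2, HW)
        v = self.value(f_vis).flatten(2)                       # (B, C, HW)
        attn = torch.softmax(q.transpose(1, 2) @ k, dim=-1)    # (B, HW, HW)
        common = (v @ attn.transpose(1, 2)).view(b, c, h, w)   # features shared by both inputs
        return common

f_ir, f_vis = torch.randn(2, 64, 32, 32), torch.randn(2, 64, 32, 32)
common = CoAttention(64)(f_ir, f_vis)
reconstructed = f_ir + f_vis        # placeholder for the backbone's reconstructed features
distinct = reconstructed - common   # "deriving the distinct features through their subtraction"
```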

Citations: 0
CEDR: Contrastive Embedding Distribution Refinement for 3D point cloud representation
IF 3.5 · CAS Tier 3 (Engineering & Technology) · Q2 ENGINEERING, ELECTRICAL & ELECTRONIC · Pub Date: 2024-04-12 · DOI: 10.1016/j.image.2024.117129
Feng Yang, Yichao Cao, Qifan Xue, Shuai Jin, Xuanpeng Li, Weigong Zhang

Distinguishable deep features are essential for 3D point cloud recognition, as they influence the search for the optimal classifier. Most existing point cloud classification methods focus mainly on local information aggregation while ignoring the feature distribution of the whole dataset, which encodes more informative and intrinsic semantic relationships among labeled data and, if better exploited, could yield more distinguishable inter-class features. Our work attempts to construct a more distinguishable feature space by performing feature distribution refinement inspired by contrastive learning and sample mining strategies, without modifying the model architecture. To explore the full potential of feature distribution refinement, two modules are involved to boost the distinguishability of exceptionally distributed samples in an adaptive manner: (i) a Confusion-Prone Classes Mining (CPCM) module targets hard-to-distinguish classes and alleviates the massive category-level confusion by generating class-level soft labels; (ii) an Entropy-Aware Attention (EAA) mechanism is proposed to remove the influence of trivial cases, which could substantially weaken model performance. Our method achieves competitive results on multiple point cloud applications. In particular, it reaches 85.8% accuracy on ScanObjectNN, with substantial performance gains of up to 2.7% in DCGNN, 3.1% in PointNet++, and 2.4% in GBNet. Our code is available at https://github.com/YangFengSEU/CEDR.
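
The entropy-aware idea can be pictured with a small, hedged sketch: one plausible instantiation weights each sample's loss by the normalized entropy of its predicted distribution, so that trivial, near-deterministic cases contribute little. The weighting direction, the `entropy_aware_weights` helper, and the 15-class setting are assumptions for illustration, not the paper's exact formulation.

```python
# Illustrative sketch only: one plausible entropy-aware weighting in which
# samples with near-deterministic (trivial) predictions are down-weighted.
import torch
import torch.nn.functional as F

def entropy_aware_weights(logits: torch.Tensor) -> torch.Tensor:
    """logits: (N, num_classes) -> per-sample weights in [0, 1]."""
    probs = F.softmax(logits, dim=-1)
    entropy = -(probs * torch.log(probs.clamp_min(1e-12))).sum(dim=-1)
    max_entropy = torch.log(torch.tensor(float(logits.shape[-1])))
    return entropy / max_entropy          # ~0 for trivial (near one-hot) predictions

def entropy_weighted_loss(logits, targets):
    weights = entropy_aware_weights(logits).detach()
    per_sample = F.cross_entropy(logits, targets, reduction="none")
    return (weights * per_sample).mean()

logits = torch.randn(8, 15)               # e.g., the 15 classes of ScanObjectNN
targets = torch.randint(0, 15, (8,))
loss = entropy_weighted_loss(logits, targets)
```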

Citations: 0
Image clustering using generated text centroids
IF 3.5 · CAS Tier 3 (Engineering & Technology) · Q2 ENGINEERING, ELECTRICAL & ELECTRONIC · Pub Date: 2024-04-12 · DOI: 10.1016/j.image.2024.117128
Daehyeon Kong, Kyeongbo Kong, Suk-Ju Kang

In recent years, deep neural networks pretrained on large-scale datasets have been used to address data deficiency and achieve better performance through prior knowledge. Contrastive language–image pretraining (CLIP), a vision-language model pretrained on an extensive dataset, achieves better performance in image recognition. In this study, we harness the power of multimodality in image clustering tasks, shifting from a single-modality to a multimodal framework by using the describability property of the CLIP image encoder. The importance of this shift lies in the ability of multimodality to provide richer feature representations. By generating text centroids corresponding to the image features, we effectively create a common descriptive language for each cluster, which improves clustering performance. The text centroids are learned from pseudo-labels produced by a standard clustering algorithm and capture a common description of each cluster. Although only text centroids are added, with the image features in the same space assigned to them, the clustering performance improves significantly compared to the standard clustering algorithm, especially on complex datasets. When the proposed method is applied, the normalized mutual information score rises by 32% on the Stanford40 dataset and 64% on ImageNet-Dog compared to the k-means clustering algorithm.
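
The skeleton of such a pipeline — pseudo-labels from a standard clustering algorithm, one centroid per cluster in a shared embedding space, and a final nearest-centroid assignment — can be sketched as follows. The centroids here are plain mean embeddings standing in for the generated text centroids, and the random features stand in for CLIP image embeddings; both are assumptions for illustration, not the authors' code.

```python
# Minimal sketch of the pipeline skeleton: k-means pseudo-labels, per-cluster
# centroids in a shared embedding space, nearest-centroid reassignment.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
image_embeddings = rng.normal(size=(200, 512))             # stand-in for CLIP image features
image_embeddings /= np.linalg.norm(image_embeddings, axis=1, keepdims=True)

k = 10
pseudo_labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(image_embeddings)

# One centroid per cluster, computed in the shared embedding space
# (mean embeddings here; generated text centroids in the paper).
centroids = np.stack([image_embeddings[pseudo_labels == c].mean(axis=0) for c in range(k)])
centroids /= np.linalg.norm(centroids, axis=1, keepdims=True)

# Final assignment: each image goes to the closest (highest cosine similarity) centroid.
final_labels = (image_embeddings @ centroids.T).argmax(axis=1)
```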

Citations: 0
Explicit3D: Graph network with spatial inference for single image 3D object detection
IF 3.5 · CAS Tier 3 (Engineering & Technology) · Q2 ENGINEERING, ELECTRICAL & ELECTRONIC · Pub Date: 2024-03-30 · DOI: 10.1016/j.image.2024.117120
Yanjun Liu, Wenming Yang

Indoor 3D object detection is an essential task in single-image scene understanding, with a fundamental impact on spatial cognition in visual reasoning. Existing works on 3D object detection from a single image either pursue this goal through independent predictions of each object or implicitly reason over all possible objects, failing to harness the relational geometric information between objects. To address this problem, we propose a sparse graph-based pipeline named Explicit3D based on object geometry and semantics features. Taking efficiency into consideration, we further define a relatedness score and design a novel dynamic pruning method via group sampling for sparse scene graph generation and updating. Furthermore, our Explicit3D introduces homogeneous matrices and defines new relative and corner losses to model the spatial difference between target pairs explicitly. Instead of using ground-truth labels as direct supervision, our relative and corner losses are derived from homogeneous transforms, which lets the model learn the geometric consistency between objects. The experimental results on the SUN RGB-D dataset demonstrate that our Explicit3D achieves a better performance balance than the state of the art.
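
A hedged sketch of how a loss over relative homogeneous transforms can supervise pairwise geometry is given below; the exact relative and corner losses in the paper are not reproduced, and the L1 penalty on the relative transform is an assumed stand-in.

```python
# Sketch (assumed formulation, not the paper's exact loss): represent each object
# pose as a 4x4 homogeneous matrix and supervise the *relative* transform between
# object pairs, so the model learns geometric consistency between objects.
import torch

def relative_transform(T_i: torch.Tensor, T_j: torch.Tensor) -> torch.Tensor:
    """T_i, T_j: (..., 4, 4) homogeneous matrices; returns T_i^{-1} @ T_j."""
    return torch.linalg.inv(T_i) @ T_j

def relative_loss(pred_T, gt_T):
    """pred_T, gt_T: (N, 4, 4) poses of the N objects in one scene."""
    n = pred_T.shape[0]
    idx_i, idx_j = torch.triu_indices(n, n, offset=1)   # all unordered object pairs
    pred_rel = relative_transform(pred_T[idx_i], pred_T[idx_j])
    gt_rel = relative_transform(gt_T[idx_i], gt_T[idx_j])
    return torch.mean(torch.abs(pred_rel - gt_rel))     # assumed L1 penalty

pred = torch.eye(4).repeat(3, 1, 1) + 0.01 * torch.randn(3, 4, 4)
gt = torch.eye(4).repeat(3, 1, 1)
loss = relative_loss(pred, gt)
```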

Citations: 0
A knowledge distillation based deep learning framework for cropped images detection in spatial domain
IF 3.5 · CAS Tier 3 (Engineering & Technology) · Q2 ENGINEERING, ELECTRICAL & ELECTRONIC · Pub Date: 2024-03-20 · DOI: 10.1016/j.image.2024.117117
Israr Hussain, Shunquan Tan, Jiwu Huang

Cropping an image is a common image editing technique that aims to find viewpoints with suitable image composition. It is also a frequently used post-processing technique to reduce the evidence of tampering in an image. Detecting cropped images poses a significant challenge in the field of digital image forensics, as the distortions introduced by image cropping are often imperceptible to the human eye. Deep neural networks achieve state-of-the-art performance owing to their ability to encode large-scale data and handle billions of model parameters. However, due to their high computational complexity and substantial storage requirements, it is difficult to deploy these large deep learning models on resource-constrained devices such as mobile phones and embedded systems. To address this issue, we propose a lightweight deep learning framework for cropping detection in the spatial domain, based on knowledge distillation. Initially, we constructed four datasets containing a total of 60,000 images cropped using various tools. We then used EfficientNet-B0, pre-trained on ImageNet with significant surgical adjustments, as the teacher model, which makes it more robust and faster to converge on this downstream task. The model was trained on 20,000 cropped and uncropped images from our own dataset, and its knowledge was then transferred to a more compact student model. Finally, we selected the best-performing lightweight model as the final prediction model, with a testing accuracy of 98.44% on the test dataset, which outperforms other methods. Extensive experiments demonstrate that our proposed model, distilled from EfficientNet-B0, achieves state-of-the-art performance in terms of detection accuracy, training parameters, and FLOPs, outperforming existing methods in detecting cropped images.
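
For readers unfamiliar with knowledge distillation, the sketch below shows a standard Hinton-style distillation loss of the kind such a framework builds on: a temperature-softened KL term against the teacher blended with the usual hard-label loss. The temperature, blending weight, and two-class (cropped vs. uncropped) setup are illustrative assumptions, not the paper's training code.

```python
# Standard knowledge-distillation loss: soft targets from the teacher plus the
# usual hard-label cross-entropy. A sketch, not the authors' exact recipe.
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, targets, T=4.0, alpha=0.7):
    """Blend a temperature-softened KL term against the teacher with a hard-label loss."""
    soft = F.kl_div(
        F.log_softmax(student_logits / T, dim=-1),
        F.softmax(teacher_logits / T, dim=-1),
        reduction="batchmean",
    ) * (T * T)
    hard = F.cross_entropy(student_logits, targets)
    return alpha * soft + (1.0 - alpha) * hard

student_logits = torch.randn(16, 2)     # cropped vs. uncropped
teacher_logits = torch.randn(16, 2)     # e.g., from the EfficientNet-B0 teacher
targets = torch.randint(0, 2, (16,))
loss = distillation_loss(student_logits, teacher_logits, targets)
```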

Citations: 0
A distortion-free authentication method for color images with tampering localization and self-recovery
IF 3.5 · CAS Tier 3 (Engineering & Technology) · Q2 ENGINEERING, ELECTRICAL & ELECTRONIC · Pub Date: 2024-03-15 · DOI: 10.1016/j.image.2024.117116
Che-Wei Lee

For immediate access and online backup, cloud storage has become a mainstream way to store and distribute digital data. Assuring that images accessed or downloaded from the cloud are reliable is critical for storage service providers. In this study, a new distortion-free color image authentication method with tampering-recovery capability is proposed, based on secret sharing, data compression, and image interpolation. The proposed method generates elaborate authentication signals that serve the double function of tampering localization and image repair. The authentication signals are subsequently converted into many shares using a (k, n)-threshold method, increasing the multiplicity of the authentication signals and reinforcing the capability of tampering recovery. These shares are then randomly concealed in the alpha channel of the to-be-protected image, which has been transformed into the PNG format containing RGBA channels. In authentication, the authentication signals computed from the alpha channel are not only used to indicate whether an image block has been tampered with, but also serve as a signal to find the corresponding color in a predefined palette to recover the tampered image block. Compared with several state-of-the-art methods, the proposed method attains positive properties including losslessness, tampering localization, and tampering recovery. Experimental results, a discussion of security considerations, and comparisons with other related methods are provided to demonstrate the superior performance of the proposed method.
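
The (k, n)-threshold mechanism the abstract relies on can be illustrated with a textbook Shamir-style sharing of a single byte: any k shares reconstruct the authentication-signal value, which is what lets multiple hidden copies survive tampering. The prime field, byte-level granularity, and omission of the alpha-channel packing are illustrative assumptions rather than the paper's parameters.

```python
# Shamir-style (k, n)-threshold sharing of one byte; background illustration only.
import random

PRIME = 257  # smallest prime above 255; illustrative choice

def make_shares(secret: int, k: int, n: int):
    """Split `secret` into n shares; any k of them reconstruct it."""
    coeffs = [secret] + [random.randrange(PRIME) for _ in range(k - 1)]
    def poly(x):
        return sum(c * pow(x, i, PRIME) for i, c in enumerate(coeffs)) % PRIME
    return [(x, poly(x)) for x in range(1, n + 1)]

def reconstruct(shares):
    """Lagrange interpolation at x = 0 over the prime field."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = (num * -xj) % PRIME
                den = (den * (xi - xj)) % PRIME
        secret = (secret + yi * num * pow(den, -1, PRIME)) % PRIME
    return secret

shares = make_shares(173, k=3, n=5)            # e.g., one authentication-signal byte
assert reconstruct(random.sample(shares, 3)) == 173
```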

Citations: 0
Adversarial domain adaptation with Siamese network for video object cosegmentation
IF 3.5 · CAS Tier 3 (Engineering & Technology) · Q2 ENGINEERING, ELECTRICAL & ELECTRONIC · Pub Date: 2024-02-15 · DOI: 10.1016/j.image.2024.117109
Li Xu, Yaodong Zhou, Bing Luo, Bo Li, Chao Zhang

Object cosegmentation aims to obtain common objects from multiple images or videos, typically by employing handcrafted features to evaluate region similarity or by learning higher-level semantic information via deep learning. However, the former, based on handcrafted features, is sensitive to illumination, appearance changes, and cluttered backgrounds across the domain gap. The latter, based on deep learning, needs ground-truth object segmentation to train a co-attention model that spotlights the common object regions in different domains. This paper proposes an adversarial domain adaptation-based video object cosegmentation method without any pixel-wise supervision. Intuitively, high-level semantic similarity is beneficial for common object recognition. However, different video sources have inconsistent distributions, i.e., a domain gap. We propose an adversarial learning method to align the feature distributions of different videos, which aims to maintain the feature similarity of common objects and overcome the dataset bias. Hence, a feature encoder based on a Siamese network is constructed to fool a discriminative network and obtain a domain-adapted feature mapping. To further assist the feature embedding of common objects, we define a latent task for label generation to train a classifying network, which makes full use of high-level semantic information. Experimental results on several video cosegmentation datasets suggest that domain adaptation based on adversarial learning can significantly improve common semantic feature extraction.
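
The adversarial-alignment ingredient can be sketched with a gradient-reversal layer in front of a domain discriminator: the shared (Siamese) encoder is pushed to make features from different videos indistinguishable. Layer sizes, the discriminator architecture, and the reversal coefficient below are placeholders, not the authors' settings.

```python
# Sketch of adversarial feature alignment with a gradient-reversal layer; the
# encoder is shared (Siamese) across both videos. Sizes are placeholders.
import torch
import torch.nn as nn

class GradReverse(torch.autograd.Function):
    @staticmethod
    def forward(ctx, x, lambd):
        ctx.lambd = lambd
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.lambd * grad_output, None   # reverse gradients into the encoder

encoder = nn.Sequential(nn.Linear(256, 128), nn.ReLU(), nn.Linear(128, 64))    # shared weights
discriminator = nn.Sequential(nn.Linear(64, 32), nn.ReLU(), nn.Linear(32, 2))  # which video?

feat_a = encoder(torch.randn(8, 256))   # region features from video A
feat_b = encoder(torch.randn(8, 256))   # region features from video B
feats = torch.cat([feat_a, feat_b])
domains = torch.cat([torch.zeros(8, dtype=torch.long), torch.ones(8, dtype=torch.long)])

logits = discriminator(GradReverse.apply(feats, 1.0))
domain_loss = nn.functional.cross_entropy(logits, domains)
domain_loss.backward()   # reversed gradients push the encoder toward domain-aligned features
```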

Citations: 0
Prediction-based coding with rate control for lossless region of interest in pathology imaging
IF 3.5 · CAS Tier 3 (Engineering & Technology) · Q2 ENGINEERING, ELECTRICAL & ELECTRONIC · Pub Date: 2024-01-22 · DOI: 10.1016/j.image.2023.117087
Joan Bartrina-Rapesta, Miguel Hernández-Cabronero, Victor Sanchez, Joan Serra-Sagristà, Pouya Jamshidi, J. Castellani

Online collaborative tools for medical diagnosis produced from digital pathology images have experienced an increase in demand in recent years. Due to the large sizes of pathology images, rate control (RC) techniques that allow accurate control of compressed file sizes are critical to meet existing bandwidth restrictions while maximizing retrieved image quality. Recently, some RC contributions to Region of Interest (RoI) coding for pathology imaging have been presented. These encode the RoI without loss and the background with some loss, and focus on providing high RC accuracy for the background area. However, none of these RC contributions deals efficiently with arbitrary RoI shapes, which hinders the accuracy of background definition and rate control. This manuscript presents a novel prediction-based coding system with a novel RC algorithm for RoI coding that allows arbitrary RoI shapes. Compared to other state-of-the-art methods, our proposed algorithm significantly improves upon their RC accuracy, while reducing the compressed data rate for the RoI by 30%. Furthermore, it offers higher quality in the reconstructed background areas, which has been linked to better clinical performance by expert pathologists. Finally, the proposed method also allows lossless compression of both the RoI and the background, producing data volumes 14% lower than coding techniques included in DICOM, such as HEVC and JPEG-LS.
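
As background on prediction-based lossless coding, the sketch below implements the classic MED (median edge detector) predictor from JPEG-LS: each pixel is predicted from its causal neighbors so that only the residual needs to be entropy-coded. This is generic background, not the predictor or rate-control algorithm proposed in the paper.

```python
# MED predictor from JPEG-LS, as a generic example of prediction-based coding:
# predict each pixel from its causal neighbors and keep only the residual.
import numpy as np

def med_predict_residuals(img: np.ndarray) -> np.ndarray:
    """img: 2-D uint8 array; returns the prediction residuals (int16)."""
    img = img.astype(np.int16)
    res = np.zeros_like(img)
    h, w = img.shape
    for y in range(h):
        for x in range(w):
            a = img[y, x - 1] if x > 0 else 0                   # left neighbor
            b = img[y - 1, x] if y > 0 else 0                   # above neighbor
            c = img[y - 1, x - 1] if (x > 0 and y > 0) else 0   # above-left neighbor
            if c >= max(a, b):
                pred = min(a, b)
            elif c <= min(a, b):
                pred = max(a, b)
            else:
                pred = a + b - c
            res[y, x] = img[y, x] - pred
    return res

img = (np.arange(64).reshape(8, 8) % 256).astype(np.uint8)
residuals = med_predict_residuals(img)   # near-zero residuals entropy-code well
```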

Citations: 0
A Dilated MultiRes Visual Attention U-Net for historical document image binarization
IF 3.5 · CAS Tier 3 (Engineering & Technology) · Q2 ENGINEERING, ELECTRICAL & ELECTRONIC · Pub Date: 2024-01-15 · DOI: 10.1016/j.image.2024.117102
Nikolaos Detsikas, Nikolaos Mitianoudis, Nikolaos Papamarkos

The task of binarization of historical document images has been at the forefront of image processing research during the digital transition of libraries. The process of storing and transcribing valuable historical printed or handwritten material can salvage world cultural heritage and make it available online without physical attendance. Binarization can be viewed as a pre-processing step that attempts to separate the printed/handwritten characters in the image from possible noise and stains, which assists the Optical Character Recognition (OCR) process. Many approaches have been proposed before, including deep learning based approaches. In this article, we propose a U-Net style deep learning architecture that incorporates many other developments of deep learning, including residual connections, multi-resolution connections, visual attention blocks, and dilated convolution blocks for upsampling. The novelties of the proposed DMVAnet lie in the combined use of these elements in a novel U-Net style architecture and in the application of DMVAnet to image binarization for the first time. In addition, the proposed DMVAnet is a computationally lightweight network that performs very close to or even better than the state-of-the-art approaches with a fraction of the network size and parameters. Finally, it can be used on platforms with restricted processing power and system resources, such as mobile devices, and through scaling can result in inference times that allow for real-time applications.
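
One of the building blocks named in the abstract, a dilated convolution block, can be sketched as below; the parallel dilation rates, channel counts, and 1×1 fusion are illustrative assumptions rather than the DMVAnet design.

```python
# Illustrative dilated-convolution block: parallel 3x3 convolutions with growing
# dilation rates enlarge the receptive field without losing spatial resolution.
import torch
import torch.nn as nn

class DilatedBlock(nn.Module):
    def __init__(self, in_ch, out_ch, dilations=(1, 2, 4)):
        super().__init__()
        self.branches = nn.ModuleList([
            nn.Sequential(
                nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=d, dilation=d),
                nn.BatchNorm2d(out_ch),
                nn.ReLU(inplace=True),
            )
            for d in dilations
        ])
        self.fuse = nn.Conv2d(out_ch * len(dilations), out_ch, kernel_size=1)

    def forward(self, x):
        return self.fuse(torch.cat([branch(x) for branch in self.branches], dim=1))

block = DilatedBlock(64, 64)
out = block(torch.randn(1, 64, 32, 32))   # same spatial size, enlarged receptive field
```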

Citations: 0
Concept drift challenge in multimedia anomaly detection: A case study with facial datasets
IF 3.5 · CAS Tier 3 (Engineering & Technology) · Q2 ENGINEERING, ELECTRICAL & ELECTRONIC · Pub Date: 2024-01-08 · DOI: 10.1016/j.image.2024.117100
Pratibha Kumari, Priyankar Choudhary, Vinit Kujur, Pradeep K. Atrey, Mukesh Saini

Anomaly detection in multimedia datasets is a widely studied area. Yet, the concept drift challenge in data has been ignored or poorly handled by the majority of anomaly detection frameworks. State-of-the-art approaches assume that the data distribution at training and deployment time will be the same. However, due to various real-life environmental factors, the data may encounter drift in its distribution or can drift from one class to another over time. Thus, a one-time trained model might not perform adequately. In this paper, we systematically investigate the effect of concept drift on various detection models and propose a modified Adaptive Gaussian Mixture Model (AGMM) based framework for anomaly detection in multimedia data. In contrast to the baseline AGMM, the proposed extension of AGMM remembers the past for a longer period in order to handle the drift better. Extensive experimental analysis shows that the proposed model handles drift in the data better than the baseline AGMM. Further, to facilitate research and comparison with the proposed framework, we contribute three multimedia datasets consisting of face samples. The face samples of each individual span an age difference of more than ten years to incorporate a longer temporal context.
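
To make the AGMM idea concrete, the sketch below shows a Stauffer-Grimson-style adaptive Gaussian mixture update for a 1-D feature stream: matched observations update a component, unmatched ones replace the weakest component and are flagged as anomalous. The learning rate, matching threshold, and 1-D setting are illustrative assumptions; the paper's extension additionally keeps a longer memory of the past, which is not modeled here.

```python
# Stauffer-Grimson-style adaptive GMM for a drifting 1-D feature stream;
# parameters and the 1-D setting are illustrative assumptions.
import numpy as np

class AdaptiveGMM:
    def __init__(self, n_components=3, alpha=0.01):
        self.w = np.full(n_components, 1.0 / n_components)   # mixture weights
        self.mu = np.linspace(-1.0, 1.0, n_components)       # component means
        self.var = np.full(n_components, 1.0)                # component variances
        self.alpha = alpha                                   # learning (forgetting) rate

    def update(self, x: float) -> bool:
        """Update with one observation; return True if x matches an existing component."""
        d = np.abs(x - self.mu) / np.sqrt(self.var)
        k = int(np.argmin(d))
        matched = d[k] < 2.5                                  # within 2.5 standard deviations
        self.w = (1 - self.alpha) * self.w
        if matched:
            self.w[k] += self.alpha
            rho = self.alpha
            self.mu[k] = (1 - rho) * self.mu[k] + rho * x
            self.var[k] = (1 - rho) * self.var[k] + rho * (x - self.mu[k]) ** 2
        else:                                                 # replace the weakest component
            j = int(np.argmin(self.w))
            self.mu[j], self.var[j], self.w[j] = x, 1.0, self.alpha
        self.w /= self.w.sum()
        return matched

gmm = AdaptiveGMM()
anomalies = [not gmm.update(x) for x in np.random.normal(0.0, 0.2, 500)]
```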

Citations: 0