
Latest publications in Journal of Electronic Imaging

Frequency domain-based reversible adversarial attacks for privacy protection in Internet of Things
IF 1.1 | Computer Science, CAS Tier 4 | Q4 ENGINEERING, ELECTRICAL & ELECTRONIC | Pub Date: 2024-08-01 | DOI: 10.1117/1.jei.33.4.043049
Yang Lu, Tianfeng Ma, Zilong Pang, Xiuli Chai, Zhen Chen, Zongwei Tang
Images shared on social networks often contain a large amount of private information. Malicious actors can use deep learning technology to extract private information from these images, causing user privacy leakage. To protect user privacy, reversible adversarial examples (RAEs) have been proposed: they keep malicious models from exploiting the image data while ensuring that an authorized model can recover the source data. However, existing RAEs fall short in imperceptibility and attack capability. We utilize frequency-domain information to generate RAEs. To improve attack capability, the RAEs are generated by discarding the discriminant information of the original class and adding specific perturbation information. For imperceptibility, we propose embedding the perturbation in the wavelet domain of the image, and we design low-frequency constraints that confine the perturbations to the high-frequency region, ensuring similarity between the original examples and the RAEs. In addition, a momentum pre-processing method is proposed: by pre-converging the gradient before the formal iterations, it keeps the gradient direction consistent across iterations, accelerating convergence and thus speeding up RAE generation. Experimental results on the ImageNet, Caltech-256, and CIFAR-10 datasets show that the proposed method exhibits the best attack capability and visual quality compared with existing RAE generation schemes: the attack success rate and peak signal-to-noise ratio exceed 99% and 42 dB, respectively. The generated RAEs also demonstrate good transferability and robustness.
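The wavelet-domain embedding idea can be illustrated with a minimal sketch, assuming a one-level Haar transform (the paper's wavelet choice and its learned perturbation generator are not reproduced here): the perturbation is added only to a high-frequency subband, so the low-frequency approximation that dominates perceived content is left untouched.

```python
import numpy as np

def haar_dwt2(img):
    """One-level 2D Haar transform: returns LL and (LH, HL, HH) subbands."""
    a = (img[0::2] + img[1::2]) / 2.0   # vertical average
    d = (img[0::2] - img[1::2]) / 2.0   # vertical difference
    LL = (a[:, 0::2] + a[:, 1::2]) / 2.0
    LH = (a[:, 0::2] - a[:, 1::2]) / 2.0
    HL = (d[:, 0::2] + d[:, 1::2]) / 2.0
    HH = (d[:, 0::2] - d[:, 1::2]) / 2.0
    return LL, (LH, HL, HH)

def haar_idwt2(LL, bands):
    """Exact inverse of haar_dwt2."""
    LH, HL, HH = bands
    h, w = LL.shape
    a = np.empty((h, 2 * w)); d = np.empty((h, 2 * w))
    a[:, 0::2] = LL + LH; a[:, 1::2] = LL - LH
    d[:, 0::2] = HL + HH; d[:, 1::2] = HL - HH
    img = np.empty((2 * h, 2 * w))
    img[0::2] = a + d; img[1::2] = a - d
    return img

def embed_high_frequency(img, perturbation):
    """Add a perturbation only to the HH subband; because the transform
    is linear and exactly invertible, the LL subband of the result is
    identical to that of the original image."""
    LL, (LH, HL, HH) = haar_dwt2(img)
    return haar_idwt2(LL, (LH, HL, HH + perturbation))
```

Re-running `haar_dwt2` on the perturbed image returns the original LL subband exactly, which is the property the low-frequency constraint enforces.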
Real-world image denoising via efficient diffusion model with controllable noise generation
IF 1.1 | Computer Science, CAS Tier 4 | Q4 ENGINEERING, ELECTRICAL & ELECTRONIC | Pub Date: 2024-07-01 | DOI: 10.1117/1.jei.33.4.043003
Cheng Yang, Cong Wang, Lijing Liang, Zhixun Su
Real-world image denoising is a critical task in image processing, aiming to restore clean images from their noisy counterparts captured in natural environments. While diffusion models have demonstrated remarkable success in image generation, surpassing traditional generative models, their application to image denoising has been limited due to challenges in controlling noise generation effectively. We present a general denoising method inspired by diffusion models. Specifically, our approach employs a diffusion process with linear interpolation, enabling control of noise generation. By interpolating the intermediate noisy image between the original clean image and the corresponding real-world noisy one, our model is able to achieve controllable noise generation. Moreover, we introduce two sampling algorithms for this diffusion model: a straightforward procedure aligned with the diffusion process and an enhanced version that addresses the shortcomings of the former. Experimental results demonstrate that our proposed method, utilizing simple convolutional neural networks such as UNet, achieves denoising performance comparable to that of the transformer architecture.
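The linear-interpolation forward process described above can be sketched in a few lines of numpy. The `denoise_step` callable stands in for the trained UNet (an assumption, since the paper's network is not reproduced here), and only the straightforward sampler aligned with the diffusion process is shown:

```python
import numpy as np

def interpolate_state(clean, noisy, t):
    """Forward process: x_t moves linearly from the clean image (t=0)
    to the corresponding real-world noisy one (t=1)."""
    return (1.0 - t) * clean + t * noisy

def naive_reverse(noisy, denoise_step, n_steps=10):
    """Straightforward sampler: walk t from 1 down to 0, letting the
    denoiser predict the clean image at each step and re-interpolating
    that estimate with the noisy input at the smaller t."""
    x = noisy
    for k in range(n_steps, 0, -1):
        x0_hat = denoise_step(x, k / n_steps)        # network's clean estimate
        x = interpolate_state(x0_hat, noisy, (k - 1) / n_steps)
    return x
```

With a perfect denoiser the sampler recovers the clean image exactly; the paper's enhanced sampler addresses the error accumulation a real (imperfect) network introduces.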
Multi-scale point pair normal encoding for local feature description and 3D object recognition
IF 1.1 | Computer Science, CAS Tier 4 | Q4 ENGINEERING, ELECTRICAL & ELECTRONIC | Pub Date: 2024-07-01 | DOI: 10.1117/1.jei.33.4.043005
Chu’ai Zhang, Yating Wang, Qiao Wu, Jiangbin Zheng, Jiaqi Yang, Siwen Quan, Yanning Zhang
Recognizing three-dimensional (3D) objects based on local feature descriptors is a highly challenging task. Existing 3D local feature descriptors rely on single-scale surface normals, which are susceptible to noise and outliers, significantly compromising their effectiveness and robustness. A multi-scale point pair normal encoding (M-POE) method for 3D object recognition is proposed. First, we introduce the M-POE descriptor, which encodes voxelized features with multi-scale normals to describe local surfaces, exhibiting strong distinctiveness and robustness against various interferences. Second, we present guided sample consensus in second-order graphs (GSAC-SOG), an extension of RANSAC that incorporates geometric constraints and reduces sampling randomness, enabling accurate estimation of the object’s six-degree-of-freedom (6-DOF) pose. Finally, a 3D object recognition method based on the M-POE descriptor is proposed. The proposed method is evaluated on five standard datasets with state-of-the-art comparisons. The results demonstrate that (1) M-POE is robust, discriminative, and efficient; (2) GSAC-SOG is robust to outliers; (3) the proposed 3D object recognition method achieves high accuracy and robustness against clutter and occlusion, with recognition rates of 99.45%, 94.21%, and 97.88% on the U3OR, Queen, and CFV datasets, respectively.
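Single- versus multi-scale normal estimation is the crux of the robustness claim. Below is a minimal numpy sketch of PCA-based surface normals computed at several support radii (illustrative only; the M-POE voxelized encoding itself is not reproduced, and the radii are placeholder values):

```python
import numpy as np

def estimate_normal(points, center, radius):
    """PCA normal at `center`: the eigenvector of the neighborhood
    covariance matrix with the smallest eigenvalue."""
    nbrs = points[np.linalg.norm(points - center, axis=1) <= radius]
    cov = np.cov((nbrs - nbrs.mean(axis=0)).T)
    w, v = np.linalg.eigh(cov)          # eigenvalues in ascending order
    return v[:, 0]

def multi_scale_normals(points, center, radii=(0.4, 0.8, 1.5)):
    """Stack of normals at several support radii; aggregating scales
    makes the cue less sensitive to noise at any single scale."""
    return np.array([estimate_normal(points, center, r) for r in radii])
```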
PGDIG-YOLO: a lightweight method for airport runway foreign object detection
IF 1.1 | Computer Science, CAS Tier 4 | Q4 ENGINEERING, ELECTRICAL & ELECTRONIC | Pub Date: 2024-07-01 | DOI: 10.1117/1.jei.33.4.043014
Liushuai Zheng, Xinyu Chen, Liuchuang Zheng
To address frequent misdetections and missed detections of foreign object debris (FOD) on airport runways, and the difficulty of deploying detection algorithms on embedded devices, we propose a lightweight FOD detection method called PGDIG-YOLO that improves on YOLOv8n. First, a detection layer for small objects is added and the large-object detection layer is removed, enhancing the network's ability to perceive small objects. Second, a dilation-wise residual module is introduced in the segmentation domain and a C2FD module is proposed, effectively reducing misdetection and missed detection of FOD on airport runways. Third, the inner-WMPDIoUv3 loss is designed to replace CIoU, improving the regression accuracy of the detection boxes. Finally, the model is pruned using the Group_sl method, which reduces computation, compresses the model size, and improves inference speed. Experimental results on the homemade FOD-Z dataset show that, compared with the baseline YOLOv8n, the PGDIG-YOLO network's model size and computation are only 6.6% and 44.4% of the original, while precision and recall improve by 1.1% and 3.8%, respectively. Meanwhile, mAP@0.5, mAP@0.75, and mAP@0.5:0.95 increase to 99.1%, 93.7%, and 85.6%, respectively. Deployed on an NVIDIA Jetson Xavier NX 16 GB embedded device, PGDIG-YOLO reaches a detection speed of 42 FPS, enabling real-time FOD detection.
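The Group_sl pruning procedure is not detailed in this abstract; as a generic illustration of structured pruning, here is a magnitude-based channel-pruning sketch in numpy (the L1 criterion and keep ratio are assumptions for illustration, not the paper's method):

```python
import numpy as np

def prune_channels(weight, keep_ratio=0.5):
    """Structured pruning sketch: rank the output channels of a conv
    weight tensor (out_ch, in_ch, kH, kW) by L1 norm and keep the
    top fraction. Generic magnitude criterion, not Group_sl."""
    out_ch = weight.shape[0]
    scores = np.abs(weight).reshape(out_ch, -1).sum(axis=1)  # per-channel L1
    n_keep = max(1, int(round(out_ch * keep_ratio)))
    keep = np.sort(np.argsort(scores)[::-1][:n_keep])        # kept indices
    return weight[keep], keep
```

In a real network the following layer's input channels would be pruned to match, which is what shrinks both model size and computation.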
Copy-move forgery detection algorithm based on binarized statistical image features and principal component analysis
IF 1.1 | Computer Science, CAS Tier 4 | Q4 ENGINEERING, ELECTRICAL & ELECTRONIC | Pub Date: 2024-07-01 | DOI: 10.1117/1.jei.33.4.043004
Azzedine Bensaad, Khaled Loukhaoukha, Said Sadoudi, Aissa Snani
The most common form of image forgery is copy-move, which arises when an image region is duplicated and pasted onto another region of the same image. An effective algorithm for copy-move forgery detection based on binarized statistical image features (BSIF) and principal component analysis (PCA) is presented. Initially, the suspicious image is converted to grayscale and is subsequently partitioned into overlapping blocks. Feature vectors are extracted from these blocks using BSIF, followed by dimensionality reduction using PCA. Next, as a precursor to the matching step, the feature vectors are sorted lexicographically. Additionally, a morphological opening operation is applied to eliminate outliers. This algorithm offers not just forgery detection but also the ability to localize and identify duplicated regions. The proposed algorithm was assessed using three datasets: CoMoFoD, GRIP, and UNIPA. The experimental results show that this algorithm is fast and has high accuracy for forgery detection and localization. Moreover, it has high robustness under various postprocessing operations, such as brightness, contrast adjustments, and blurring. Furthermore, the proposed algorithm outperforms some recent approaches in overall performance.
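The block-matching pipeline (overlapping blocks, feature extraction, PCA reduction, lexicographic sorting, adjacent-row matching) can be sketched end to end in numpy. Here raw pixel blocks stand in for BSIF features, which would require the BSIF filter bank; everything else follows the described flow:

```python
import numpy as np

def detect_copy_move(gray, block=8, n_comp=8, dist_tol=1e-6, min_offset=10):
    """Sketch of the detection flow: overlapping blocks -> feature
    vectors (raw pixels here, BSIF in the paper) -> PCA reduction ->
    lexicographic sort -> match adjacent rows whose spatial offset is
    large enough to rule out trivially neighboring blocks."""
    h, w = gray.shape
    coords, feats = [], []
    for i in range(h - block + 1):
        for j in range(w - block + 1):
            coords.append((i, j))
            feats.append(gray[i:i + block, j:j + block].ravel())
    X = np.asarray(feats, dtype=float)
    X -= X.mean(axis=0)
    _, _, Vt = np.linalg.svd(X, full_matrices=False)  # PCA via SVD
    P = X @ Vt[:n_comp].T                              # reduced features
    order = np.lexsort(P.T[::-1])                      # lexicographic sort
    matches = []
    for a, b in zip(order[:-1], order[1:]):
        if np.linalg.norm(P[a] - P[b]) < dist_tol:
            (y1, x1), (y2, x2) = coords[a], coords[b]
            if np.hypot(y1 - y2, x1 - x2) >= min_offset:
                matches.append((coords[a], coords[b]))
    return matches
```

The paper additionally applies a morphological opening to the match map to discard isolated outliers before localization.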
Deep unsupervised nonconvex optimization for edge-preserving image smoothing
IF 1.1 | Computer Science, CAS Tier 4 | Q4 ENGINEERING, ELECTRICAL & ELECTRONIC | Pub Date: 2024-07-01 | DOI: 10.1117/1.jei.33.4.043001
Yiwen Xiong, Yang Yang, Lanling Zeng, Xinyu Wang, Zhigeng Pan, Lei Jiang
Edge-preserving image smoothing plays a vital role in the field of computational imaging. It is a valuable technique that has applications in various tasks. However, different tasks have specific requirements for edge preservation. Existing filters do not take into account the task-dependent smoothing behavior, resulting in visually distracting artifacts. We propose a flexible edge-preserving image filter based on a nonconvex Welsch penalty. Compared with the convex models, our model can better handle complex data and capture nonlinear relationships, thus providing better results. We combine deep unsupervised learning and graduated nonconvexity to solve our nonconvex objective function, where the main network structure is designed as a Swin transformer complemented with the locally enhanced feed-forward network. Experimental results show that the proposed method achieves excellent performance in various applications, including image smoothing, high dynamic range tone mapping, detail enhancement, and edge extraction.
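The Welsch penalty has a standard form; a minimal sketch (the paper's exact parameterization may differ) shows why it preserves edges: unlike a quadratic, its cost saturates, so large residuals at edges stop being penalized further.

```python
import numpy as np

def welsch(x, nu=1.0):
    """Nonconvex Welsch penalty: behaves like x^2/2 near zero but
    saturates at nu^2/2 for large |x|, so strong edges are not
    over-penalized the way a quadratic x^2/2 would penalize them."""
    return (nu ** 2 / 2.0) * (1.0 - np.exp(-((x / nu) ** 2)))
```

Graduated nonconvexity, as used in the paper, starts from a large `nu` (nearly quadratic, hence nearly convex) and shrinks it, tracking the minimizer as the objective becomes progressively more nonconvex.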
ResRetinaFace: an efficient face detection network based on RetinaFace and residual structure
IF 1.1 | Computer Science, CAS Tier 4 | Q4 ENGINEERING, ELECTRICAL & ELECTRONIC | Pub Date: 2024-07-01 | DOI: 10.1117/1.jei.33.4.043012
Xuanyu Liu, Shuliang Zhang, Junjie Hu, Peiyu Mao
Multi-face detection in unconstrained environments suffers from insufficient accuracy and efficiency in deep learning approaches, and the detection of blurred, occluded, and very small faces is particularly unsatisfactory; it remains a hard problem in face detection, as does balancing detection accuracy against real-time efficiency. Therefore, to improve the detection of blurred, occluded, and very small faces among multiple faces in unconstrained environments, we build on the RetinaFace algorithm and introduce deformable convolution, feature pyramid networks (FPN), and the coordinate attention (CA) mechanism. Deformable convolution adjusts dynamically to the shape and deformation of the recognized object rather than being limited to a fixed-size square receptive field, improving the convolutional layers' feature extraction capability. FPN enhances the semantic information of lower-layer features at a small computational cost and improves the detector's robustness to targets of different sizes. CA is a novel, lightweight, and efficient attention module that can be easily integrated into mobile networks to improve accuracy with little additional computational overhead. The improved ResRetinaFace algorithm raises recognition accuracy without greatly increasing computational overhead; it better handles the varied poses and deformations of faces in complex scenes, adapts to pose deformation, and provides more effective features for face detection, focusing attention on the detection target and strengthening the network's representational ability. The improved algorithm also combines the feature pyramid with a context module, which improves detection of blurred, occluded, and very small faces. Experimental outcomes show that, compared with the method before enhancement, accuracy rates on the easy, medium, and hard subsets of the WIDER FACE dataset, using a ResNet50 backbone, are 94.83%, 93.28%, and 84.99%, respectively, at 7.704 frames per second, meeting the precision and real-time criteria for face measurement tasks. Validation on the WIDER FACE dataset further confirms that ResRetinaFace consistently achieves reliable face detection while maintaining high detection efficiency.
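Here is a structural sketch of the coordinate attention gating for a single feature map, in numpy. The published CA module concatenates the two pooled vectors and passes them through a shared reduction transform; for brevity this sketch uses two independent weight matrices (`w_h` and `w_w` are stand-ins for learned 1x1 convolutions, an assumption):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def coordinate_attention(x, w_h, w_w):
    """Simplified coordinate attention for a feature map x of shape
    (C, H, W): pool along each spatial axis separately, derive
    direction-aware gates, and reweight the input. Unlike plain
    channel attention, positional information along H and W survives."""
    pool_h = x.mean(axis=2)              # (C, H): aggregate over width
    pool_w = x.mean(axis=1)              # (C, W): aggregate over height
    gate_h = sigmoid(w_h @ pool_h)       # (C, H) gate per row
    gate_w = sigmoid(w_w @ pool_w)       # (C, W) gate per column
    return x * gate_h[:, :, None] * gate_w[:, None, :]
```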
在深度学习中,无约束环境下的多人脸检测存在检测精度不够、效率不高的问题,同时,对模糊、遮挡和极小人脸的检测效果更不理想。在无约束环境下的多人脸检测中,模糊、遮挡和极小人脸的检测是目前人脸检测中的一个难题。本研究选择的改进型 RetinaFace 很难兼顾人脸检测的检测精度和实时效率。因此,为了提高在无约束环境中检测多张人脸中的模糊、遮挡和极小人脸的效率,我们在 RetinaFace 算法的基础上引入了可变形卷积、特征金字塔网络(FPN)和协调注意(CA)注意机制。可变形卷积可根据识别对象的形状和变形进行动态调整,不再局限于固定大小的正方形感受野,从而提高卷积层的图像特征提取能力。FPN 只需增加少量计算量,就能增强下层的特征语义信息,并提高检测算法的鲁棒性,以检测不同大小的目标。CA 是一种新颖、轻量级和高效的注意力机制模块,用于提高模型性能,可轻松集成到移动网络中,在几乎不增加额外计算开销的情况下提高精确度。改进后的 ResRetinaFace 算法在提高识别准确率的同时,并没有增加过多的计算开销,而且能更好地结合复杂场景中人脸的多姿态、多变形的特点,适应人脸姿态的变形状态,为人脸检测提供更有效的特征,从而更好地关注检测目标,提高网络表征能力。同时,改进后的算法将特征金字塔与上下文模块相结合,提高了模糊、遮挡和极小人脸情况下的检测效果。实验结果表明,与改进前的方法相比,利用 ResNet50 骨干网络对 WIDER FACE 数据集进行易、中、难分类的准确率分别为 94.83%、93.28% 和 84.99%。在每秒 7.704 帧的帧率支持下,达到了人脸测量任务的精度和实时性标准。在 WIDER FACE 数据集上的验证进一步证实,ResRetinaFace 在保持高检测效率的同时,还能持续实现可靠的人脸检测。
ResRetinaFace: an efficient face detection network based on RetinaFace and residual structure
IF 1.1 Tier 4, Computer Science Q4 ENGINEERING, ELECTRICAL & ELECTRONIC Pub Date : 2024-07-01 DOI: 10.1117/1.jei.33.4.043012
Xuanyu Liu, Shuliang Zhang, Junjie Hu, Peiyu Mao
Multi-face detection in unconstrained environments with deep learning suffers from insufficient accuracy and efficiency, and detecting blurred, occluded, and very small faces remains a hard problem; balancing detection accuracy against real-time efficiency is also difficult, which motivates the improved RetinaFace chosen in this study. To improve the detection of blurred, occluded, and very small faces among multiple faces in unconstrained environments, we build on the RetinaFace algorithm by introducing deformable convolution, feature pyramid networks (FPN), and the coordinate attention (CA) mechanism. Deformable convolution adapts dynamically to the shape and deformation of the recognized object rather than being limited to a fixed-size square receptive field, improving the convolutional layers' feature extraction. FPN enriches the semantic information of lower-level features at a small computational cost and makes the detector more robust to targets of different sizes. CA is a novel, lightweight, and efficient attention module that integrates easily into mobile networks and improves accuracy with little additional computational overhead. The improved ResRetinaFace algorithm raises recognition accuracy without a large increase in computational overhead: it better models the varied poses and deformations of faces in complex scenes, supplies more effective features for detection, and strengthens the network's ability to attend to the target. Combining the feature pyramid with a context module further improves detection of blurred, occluded, and very small faces. Experimental results show that, with a ResNet50 backbone, accuracy on the easy, medium, and hard subsets of the WIDER FACE dataset reaches 94.83%, 93.28%, and 84.99%, respectively, at 7.704 frames per second, meeting the precision and real-time requirements of face detection tasks.
Citations: 0
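The coordinate-attention idea above can be illustrated with a deliberately simplified NumPy sketch: the feature map is average-pooled along each spatial direction, each pooled descriptor is turned into a sigmoid gate, and the gates reweight every position. The per-direction weight matrices `w_h`/`w_w` are illustrative stand-ins for learned parameters; the real CA module additionally shares a 1×1 convolution between the two pooled descriptors.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def coordinate_attention(x, w_h, w_w):
    """Toy coordinate-attention forward pass.

    x   : feature map of shape (C, H, W)
    w_h : (C, C) weights producing the height-direction gate (assumed learned)
    w_w : (C, C) weights producing the width-direction gate (assumed learned)
    """
    pool_h = x.mean(axis=2)          # (C, H): average-pool along the width axis
    pool_w = x.mean(axis=1)          # (C, W): average-pool along the height axis
    gate_h = sigmoid(w_h @ pool_h)   # per-row attention weights in (0, 1)
    gate_w = sigmoid(w_w @ pool_w)   # per-column attention weights in (0, 1)
    # reweight each position by its row gate and its column gate
    return x * gate_h[:, :, None] * gate_w[:, None, :]
```

Because the gates factor into a height part and a width part, the module encodes positional information along both axes at a cost linear in H + W, which is why it suits mobile backbones.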
Improving the deblurring method of D2Net network for infrared videos
IF 1.1 Tier 4, Computer Science Q4 ENGINEERING, ELECTRICAL & ELECTRONIC Pub Date : 2024-07-01 DOI: 10.1117/1.jei.33.4.043013
Jia Zhang, Yanzhu Zhang, Fan Yang, Tingxue Li, Yuhai Li, He Zhao, Jixiong Pu
Under motion and complex environmental conditions, infrared videos captured by thermal imaging devices often suffer from blurring, leaving the targets' details and positional information unclear or missing. To address this problem, this work proposes an improved deblurring method for infrared videos, built on a deep-learning deblurring network originally designed for visible-light images. The method extends the D2Net network with a spatial and channel reconstruction convolution that suppresses feature redundancy, enhancing the network's capacity for image feature learning. In the encoder-decoder module, a triple attention mechanism and the fast Fourier transform are introduced to further improve deblurring performance. Ablation experiments on infrared datasets show a significant improvement over the original D2Net: the improved network gains 1.42 dB in peak signal-to-noise ratio and 0.02 in structural similarity. In summary, this paper achieves promising results on infrared video deblurring tasks, demonstrating the effectiveness of the proposed method.
Citations: 0
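The fast-Fourier-transform component mentioned above can be sketched minimally: transform a feature map to the frequency domain, apply a per-frequency gain, invert, and add the result back through a residual connection. The real-valued `freq_gain` is an illustrative assumption standing in for whatever learned frequency-domain filtering the network applies; it is not D2Net's actual parameterization.

```python
import numpy as np

def fft_residual_branch(x, freq_gain):
    """Filter a feature map in the frequency domain, then add it back as a residual.

    x         : (H, W) feature map
    freq_gain : (H, W // 2 + 1) real-valued gain on the rFFT spectrum
    """
    spectrum = np.fft.rfft2(x)                      # forward real 2-D FFT
    filtered = np.fft.irfft2(spectrum * freq_gain,  # scale each frequency, invert
                             s=x.shape)
    return x + filtered                             # residual connection keeps detail
```

With a unit gain the branch simply doubles the input; a gain that attenuates high frequencies acts as a low-pass filter, which is the kind of global-context operation that helps remove blur a small convolution kernel cannot see.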
Highly compressed image encryption algorithm via fractal and semi-tensor product compressed sensing
IF 1.1 Tier 4, Computer Science Q4 ENGINEERING, ELECTRICAL & ELECTRONIC Pub Date : 2024-07-01 DOI: 10.1117/1.jei.33.4.043026
Lin Fan, Meng Li
Storage demands and security concerns for multimedia images have become a global issue in recent years. Image encryption via compressed sensing (CS) is an effective way to secure data while reducing storage space, but existing CS-based image encryption still faces problems such as weak resistance to attacks and extensive data storage. We design a highly compressed image encryption algorithm that combines fractal coding with semi-tensor product compressed sensing. First, the measurement matrix required for CS is generated from fractal blocks combined with the semi-tensor product method, which enhances security while shrinking the measurement matrix. Then, the sampled measurements are used to define product features from their mean and standard deviation; exclusion criteria are set, and fractal codes are obtained through matched searching. Finally, the fractal code undergoes scrambling and diffusion, providing triple-layer protection that further secures the secret image. Compared with conventional methods, the proposed method greatly improves compression efficiency through compressed sampling and offers better concealment and stronger robustness. Experiments substantiate the effectiveness and superior performance of our method while upholding image quality and security.
Citations: 0
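The appeal of the semi-tensor product (STP) for compressed sensing is that a small m × n measurement matrix can sample a signal whose length is any multiple of n, so the stored matrix shrinks. A minimal sketch under toy assumptions (a random Gaussian Φ here, whereas the paper constructs Φ from fractal blocks):

```python
import numpy as np

def stp(a, b):
    """Left semi-tensor product a ⋉ b.

    For a of shape (m, n) and b of shape (p, q) it equals
    (a ⊗ I_{t/n}) @ (b ⊗ I_{t/p}) with t = lcm(n, p); when n == p it
    reduces to the ordinary matrix product.
    """
    n, p = a.shape[1], b.shape[0]
    t = int(np.lcm(n, p))
    return np.kron(a, np.eye(t // n)) @ np.kron(b, np.eye(t // p))

# Sampling a length-8 signal with only a 2 x 4 measurement matrix:
rng = np.random.default_rng(0)
phi = rng.standard_normal((2, 4))   # 8 stored entries vs. 32 for a conventional 4 x 8 matrix
x = rng.standard_normal((8, 1))     # signal of length 8 (a multiple of 4)
y = stp(phi, x)                     # 4 measurements: 2x compression
```

Here the STP expands Φ on the fly via a Kronecker product with an identity block, so only the small matrix needs to be stored or transmitted.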
Progressive reversible data hiding in encrypted images based on polynomial secret sharing and Chinese remainder theorem
IF 1.1 Tier 4, Computer Science Q4 ENGINEERING, ELECTRICAL & ELECTRONIC Pub Date : 2024-07-01 DOI: 10.1117/1.jei.33.4.043008
Chao Jiang, Minqing Zhang, Zongbao Jiang, Yongjun Kong, Fuqiang Di
In today's distributed environments, reversible data hiding in encrypted images suffers from low security and a lack of progressivity. To address this, a homomorphic embedding algorithm is proposed based on polynomial secret sharing (PSS) and the Chinese remainder theorem (CRT). First, the image owner encrypts the carrier image with a stream cipher and sends it to the data hider. The data hider then uses PSS to split the carrier image into n shares while homomorphically embedding the extra, secret-shared data into those carrier shares. After splitting by the CRT, each share of the embedded data is divided into sub-shares that are distributed to the participants. Participants satisfying the threshold condition provide part or all of their sub-shares according to the data extractor's authority: if every participant provides all sub-shares, the secrets and the carrier image can be reconstructed completely; if each provides only part, they can be reconstructed partially. Experimental results show that the proposed scheme is progressive and highly secure and offers a large embedding rate (ER) that is unaffected by the carrier image.
Citations: 0
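The two primitives above can be sketched end to end: Shamir-style polynomial sharing over a small prime field, then a CRT split of each share value into sub-shares. The field size, moduli, and threshold below are toy assumptions for illustration, not the paper's parameters.

```python
import random
from math import prod

P = 8191  # prime field, comfortably larger than an 8-bit pixel value (toy choice)

def make_shares(secret, k, n):
    """(k, n) polynomial secret sharing: any k of the n shares recover the secret."""
    coeffs = [secret] + [random.randrange(P) for _ in range(k - 1)]
    return [(x, sum(c * pow(x, j, P) for j, c in enumerate(coeffs)) % P)
            for x in range(1, n + 1)]

def reconstruct(shares):
    """Lagrange interpolation at x = 0 over GF(P)."""
    secret = 0
    for xi, yi in shares:
        num = den = 1
        for xj, _ in shares:
            if xj != xi:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        secret = (secret + yi * num * pow(den, -1, P)) % P
    return secret

def crt_split(value, moduli):
    """Split one share value into sub-shares: its residues mod pairwise-coprime moduli."""
    return [value % m for m in moduli]

def crt_combine(residues, moduli):
    """Recover the value (< prod(moduli)) from all of its residues via the CRT."""
    M = prod(moduli)
    return sum(r * (M // m) * pow(M // m, -1, m)
               for r, m in zip(residues, moduli)) % M
```

Handing a participant only some of the residues recovers the share value only modulo the product of the provided moduli, which is the mechanism behind the partial ("progressive") reconstruction described in the abstract.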