
Signal Processing-Image Communication: Latest Publications

A dual fusion deep convolutional network for blind universal image denoising
IF 3.5 | CAS Tier 3 (Engineering & Technology) | Q2 ENGINEERING, ELECTRICAL & ELECTRONIC | Pub Date: 2023-11-12 | DOI: 10.1016/j.image.2023.117077
Zhiyu Lyu, Yan Chen, Haojun Sun, Yimin Hou

Blind image denoising and edge preservation are two primary challenges in recovering an image from low-level vision for high-level vision tasks. Blind denoising requires that a single denoiser handle noise of any intensity, which is of practical value because accurate noise levels cannot be obtained from real-world images. Edge preservation, in turn, retains image features for subsequent processing and is therefore also important for denoising. In this paper, we propose a novel blind universal image denoiser that removes both synthetic and real-world noise while preserving image texture. The denoiser consists of a noise network and a prior network running in parallel; a fusion block then weights the two networks to balance computational cost and denoising performance. We also use the Non-subsampled Shearlet Transform (NSST) to enlarge the receptive field and obtain more detailed information. Extensive denoising experiments on synthetic and real-world images show the effectiveness of our denoiser.
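
The parallel two-branch design with a learned fusion weight can be sketched in PyTorch as below. This is a minimal illustration rather than the authors' implementation: the branch depths, channel widths, and the sigmoid-gated fusion weighting are assumptions, and the NSST preprocessing is omitted.

```python
import torch
import torch.nn as nn

def conv_branch(channels=3, width=64, depth=4):
    """A small convolutional branch standing in for the noise/prior network."""
    layers = [nn.Conv2d(channels, width, 3, padding=1), nn.ReLU(inplace=True)]
    for _ in range(depth - 2):
        layers += [nn.Conv2d(width, width, 3, padding=1), nn.ReLU(inplace=True)]
    layers += [nn.Conv2d(width, channels, 3, padding=1)]
    return nn.Sequential(*layers)

class DualFusionDenoiser(nn.Module):
    """Two parallel branches whose outputs are blended by a learned per-pixel weight."""
    def __init__(self, channels=3):
        super().__init__()
        self.noise_net = conv_branch(channels)   # branch focused on noise removal
        self.prior_net = conv_branch(channels)   # branch modeling the image prior/texture
        # Fusion block: predicts a weight map from the two branch outputs.
        self.fusion = nn.Sequential(
            nn.Conv2d(2 * channels, channels, 3, padding=1), nn.Sigmoid())

    def forward(self, x):
        a = self.noise_net(x)
        b = self.prior_net(x)
        w = self.fusion(torch.cat([a, b], dim=1))  # weight in [0, 1]
        return w * a + (1 - w) * b

if __name__ == "__main__":
    y = DualFusionDenoiser()(torch.randn(1, 3, 64, 64))
    print(y.shape)  # torch.Size([1, 3, 64, 64])
```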

{"title":"A dual fusion deep convolutional network for blind universal image denoising","authors":"Zhiyu Lyu,&nbsp;Yan Chen,&nbsp;Haojun Sun,&nbsp;Yimin Hou","doi":"10.1016/j.image.2023.117077","DOIUrl":"https://doi.org/10.1016/j.image.2023.117077","url":null,"abstract":"<div><p><span>Blind image denoising and edge-preserving are two primary challenges to recover an image from low-level vision to high-level vision. Blind denoising requires a single denoiser can denoise images with any intensity of noise, and it has practical utility since accurate noise levels cannot be acquired from realistic images. On the other hand, </span>edge preservation<span><span> can provide more image features for subsequent processing which is also important for the denoising. In this paper, we propose a novel blind universal image denoiser to remove synthesis and realistic noise while preserving the image texture. The denoiser consists of noise network and prior network parallelly, and then a fusion block is used to give the weight between these two networks to balance computation cost and denoising performance. We also use the Non-subsampled Shearlet Transform (NSST) to enlarge the size of receptive field to obtain more detailed information. Extensive denoising experiments on </span>synthetic images and realistic images show the effectiveness of our denoiser.</span></p></div>","PeriodicalId":49521,"journal":{"name":"Signal Processing-Image Communication","volume":"120 ","pages":"Article 117077"},"PeriodicalIF":3.5,"publicationDate":"2023-11-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"134656277","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
ClGanNet: A novel method for maize leaf disease identification using ClGan and deep CNN
IF 3.5 | CAS Tier 3 (Engineering & Technology) | Q2 ENGINEERING, ELECTRICAL & ELECTRONIC | Pub Date: 2023-11-04 | DOI: 10.1016/j.image.2023.117074
Vivek Sharma, Ashish Kumar Tripathi, Purva Daga, Nidhi M., Himanshu Mittal

With the advancement of technologies, automatic plant leaf disease detection has received considerable attention from researchers working in the area of precision agriculture. A number of deep learning-based methods have been introduced in the literature for automated plant disease detection. However, the majority of datasets collected from real fields suffer from blurred background information, data imbalance, limited generalization, and tiny lesion features, which may lead to over-fitting of the model. Moreover, the large parameter size of deep learning models is also a concern for agricultural applications, where resources are limited. In this paper, a novel ClGan (Crop Leaf Gan) with an improved loss function is developed with fewer parameters than existing state-of-the-art methods. The generator and discriminator of ClGan are built around an encoder–decoder network to avoid the vanishing gradient problem, training instability, and non-convergence failure, while preserving complex details during synthetic image generation with significant lesion differentiation. The proposed improved loss function introduces a dynamic correction factor that stabilizes learning while maintaining effective weight optimization. In addition, a novel plant leaf classification method, ClGanNet, is introduced to classify plant diseases efficiently. The efficiency of the proposed ClGan was validated on the maize leaf dataset in terms of the number of parameters and the FID score, and the results are compared against five other state-of-the-art GAN models, namely DC-GAN, W-GAN, WGAN-GP, InfoGan, and LeafGan. Moreover, the performance of the proposed classifier, ClGanNet, was evaluated against seven state-of-the-art methods on eight parameters over the original, basic augmented, and ClGan augmented datasets. In the experiments, ClGanNet outperformed all the considered methods with 99.97% training and 99.04% testing accuracy while using the fewest parameters.
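
A generic encoder–decoder generator of the kind the abstract describes can be sketched as follows. This is a minimal, hypothetical layout (layer count, channel widths, and activations are assumptions) and does not reproduce ClGan's improved loss or its dynamic correction factor.

```python
import torch
import torch.nn as nn

class EncoderDecoderGenerator(nn.Module):
    """Minimal encoder-decoder generator: downsample, then upsample back to image size."""
    def __init__(self, in_ch=3, base=32):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(in_ch, base, 4, stride=2, padding=1),
            nn.LeakyReLU(0.2, inplace=True),
            nn.Conv2d(base, base * 2, 4, stride=2, padding=1),
            nn.BatchNorm2d(base * 2),
            nn.LeakyReLU(0.2, inplace=True),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(base * 2, base, 4, stride=2, padding=1),
            nn.BatchNorm2d(base),
            nn.ReLU(inplace=True),
            nn.ConvTranspose2d(base, in_ch, 4, stride=2, padding=1),
            nn.Tanh(),
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))

if __name__ == "__main__":
    fake = EncoderDecoderGenerator()(torch.randn(2, 3, 128, 128))
    print(fake.shape)  # torch.Size([2, 3, 128, 128])
```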

{"title":"ClGanNet: A novel method for maize leaf disease identification using ClGan and deep CNN","authors":"Vivek Sharma ,&nbsp;Ashish Kumar Tripathi ,&nbsp;Purva Daga ,&nbsp;Nidhi M. ,&nbsp;Himanshu Mittal","doi":"10.1016/j.image.2023.117074","DOIUrl":"https://doi.org/10.1016/j.image.2023.117074","url":null,"abstract":"<div><p>With the advancement of technologies, automatic plant leaf disease detection has received considerable attention from researchers working in the area of precision agriculture. A number of deep learning-based methods have been introduced in the literature for automated plant disease detection. However, the majority of datasets collected from real fields have blurred background information, data imbalances, less generalization, and tiny lesion features, which may lead to over-fitting of the model. Moreover, the increased parameter size of deep learning models is also a concern, especially for agricultural applications due to limited resources. In this paper, a novel ClGan (Crop Leaf Gan) with improved loss function has been developed with a reduced number of parameters as compared to the existing state-of-the-art methods. The generator and discriminator of the developed ClGan have been encompassed with an encoder–decoder network to avoid the vanishing gradient problem, training instability, and non-convergence failure while preserving complex intricacies during synthetic image generation with significant lesion differentiation. The proposed improved loss function introduces a dynamic correction factor that stabilizes learning while perpetuating effective weight optimization. In addition, a novel plant leaf classification method ClGanNet, has been introduced to classify plant diseases efficiently. The efficiency of the proposed ClGan was validated on the maize leaf dataset in terms of the number of parameters and FID score, and the results are compared against five other state-of-the-art GAN models namely, DC-GAN, W-GAN, <span><math><mrow><mi>W</mi><mi>G</mi><mi>a</mi><msub><mrow><mi>n</mi></mrow><mrow><mi>G</mi><mi>P</mi></mrow></msub></mrow></math></span>, InfoGan, and LeafGan. Moreover, the performance of the proposed classifier, ClGanNet, was evaluated with seven state-of-the-art methods against eight parameters on the original, basic augmented, and ClGan augmented datasets. Experimental results of ClGanNet have outperformed all the considered methods with 99.97% training and 99.04% testing accuracy while using the least number of parameters.</p></div>","PeriodicalId":49521,"journal":{"name":"Signal Processing-Image Communication","volume":"120 ","pages":"Article 117074"},"PeriodicalIF":3.5,"publicationDate":"2023-11-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"91987222","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Image tone mapping based on clustering and human visual system models
IF 3.5 | CAS Tier 3 (Engineering & Technology) | Q2 ENGINEERING, ELECTRICAL & ELECTRONIC | Pub Date: 2023-10-26 | DOI: 10.1016/j.image.2023.117075
Xueyu Han, Ishtiaq Rasool Khan, Susanto Rahardja

Natural scenes generally have a very high dynamic range (HDR) that cannot be captured in standard dynamic range (SDR) images. HDR imaging techniques can capture these details in both dark and bright regions, and the resulting HDR images can be tone mapped to reproduce them on SDR displays. To adapt to different applications, a tone mapping operator (TMO) should achieve high performance across diverse HDR scenes. In this paper, we present a clustering-based TMO that embeds human visual system models which function effectively in different scenes. A hierarchical scheme is applied for clustering to reduce the computational complexity. We also propose a detail preservation method that superimposes the details of the original HDR image to enhance local contrast, and a color preservation method that limits the adaptive saturation parameter to control color saturation attenuation. The effectiveness of our method is assessed by comparing it with state-of-the-art TMOs quantitatively on large-scale HDR datasets and qualitatively with a group of subjects. Experimental results of both objective and subjective evaluations show that the proposed method improves on the competing methods in generating high-quality tone-mapped images with good contrast and natural color appearance for diverse HDR scenes.
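
The detail-preservation idea, compressing a smooth base layer while superimposing the original detail layer, can be illustrated generically as below. This sketch is not the paper's clustering/HVS pipeline; the Gaussian base extraction, the compression factor, and the detail gain are assumptions chosen only for illustration.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def tone_map_detail_preserving(hdr_rgb, compression=0.5, detail_gain=1.2, eps=1e-6):
    """Compress a blurred base layer in the log-luminance domain and re-add detail."""
    lum = 0.2126 * hdr_rgb[..., 0] + 0.7152 * hdr_rgb[..., 1] + 0.0722 * hdr_rgb[..., 2]
    log_lum = np.log(lum + eps)
    base = gaussian_filter(log_lum, sigma=5)   # low-frequency base layer
    detail = log_lum - base                    # high-frequency detail layer
    new_lum = np.exp(compression * base + detail_gain * detail)
    ratio = (new_lum / (lum + eps))[..., None]
    sdr = hdr_rgb * ratio                      # rescale colors by the luminance ratio
    return np.clip(sdr / sdr.max(), 0.0, 1.0)

if __name__ == "__main__":
    hdr = (np.random.rand(64, 64, 3).astype(np.float32) ** 4) * 1000.0  # fake HDR radiance
    print(tone_map_detail_preserving(hdr).shape)  # (64, 64, 3)
```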

{"title":"Image tone mapping based on clustering and human visual system models","authors":"Xueyu Han ,&nbsp;Ishtiaq Rasool Khan ,&nbsp;Susanto Rahardja","doi":"10.1016/j.image.2023.117075","DOIUrl":"10.1016/j.image.2023.117075","url":null,"abstract":"<div><p><span><span>Natural scenes generally have very high dynamic range (HDR) which cannot be captured in the standard dynamic range (SDR) images. HDR imaging techniques can be used to capture these details in both dark and bright regions, and the resultant HDR images can be tone mapped to reproduce them on SDR displays. To adapt to different applications, the tone mapping operator (TMO) should be able to achieve high performance for diverse HDR scenes. In this paper, we present a clustering-based TMO by embedding </span>human visual system models that function effectively in different scenes. A hierarchical scheme is applied for clustering to reduce the </span>computational complexity<span>. We also propose a detail preservation method by superimposing the details of original HDR images to enhance local contrasts, and a color preservation method by limiting the adaptive saturation parameter to control the color saturation attenuating. The effectiveness of our method is assessed by comparing with state-of-the-art TMOs quantitatively on large-scale HDR datasets and qualitatively with a group of subjects. Experimental results of both objective and subjective evaluations show that the proposed method achieves improvements over the competing methods in generating high quality tone-mapped images with good contrast and natural color appearance for diverse HDR scenes.</span></p></div>","PeriodicalId":49521,"journal":{"name":"Signal Processing-Image Communication","volume":"120 ","pages":"Article 117075"},"PeriodicalIF":3.5,"publicationDate":"2023-10-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"136093478","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Individual tooth segmentation in human teeth images using pseudo edge-region obtained by deep neural networks
IF 3.5 | CAS Tier 3 (Engineering & Technology) | Q2 ENGINEERING, ELECTRICAL & ELECTRONIC | Pub Date: 2023-10-26 | DOI: 10.1016/j.image.2023.117076
Seongeun Kim, Chang-Ock Lee

In human teeth images taken outside the oral cavity with a general optical camera, it is difficult to segment individual teeth due to common obstacles such as weak edges, intensity inhomogeneities, and strong light reflections. In this work, we propose a method for segmenting individual teeth in human teeth images. The key to the method is to obtain a pseudo edge-region using deep neural networks. After an additional step that obtains an initial contour for each tooth region, each tooth is segmented by applying active contour models. We also present a strategy that uses existing model-based methods to label the data required for neural network training.
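
The second stage, evolving an active contour from an initial region over an edge map, can be illustrated with scikit-image's morphological geodesic active contour. In this sketch the synthetic disk image and the hand-drawn initial square stand in for the network-predicted pseudo edge-region and the per-tooth initial contours; parameter values are assumptions.

```python
import numpy as np
from skimage.segmentation import (inverse_gaussian_gradient,
                                  morphological_geodesic_active_contour)

# Synthetic grayscale "tooth": a bright disk on a dark background.
yy, xx = np.mgrid[0:128, 0:128]
image = (((yy - 64) ** 2 + (xx - 64) ** 2) < 30 ** 2).astype(np.float64)

# Edge indicator map (stands in for a pseudo edge-region from a network).
gimage = inverse_gaussian_gradient(image, alpha=100.0, sigma=2.0)

# Initial contour: a square region well inside the object.
init = np.zeros(image.shape, dtype=np.int8)
init[44:84, 44:84] = 1

# Evolve a morphological geodesic active contour from the initial region.
segmentation = morphological_geodesic_active_contour(gimage, 100,
                                                     init_level_set=init,
                                                     smoothing=2, balloon=1)
print(int(segmentation.sum()), "pixels labeled as tooth")
```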

{"title":"Individual tooth segmentation in human teeth images using pseudo edge-region obtained by deep neural networks","authors":"Seongeun Kim,&nbsp;Chang-Ock Lee","doi":"10.1016/j.image.2023.117076","DOIUrl":"https://doi.org/10.1016/j.image.2023.117076","url":null,"abstract":"<div><p><span><span>In human teeth images taken outside the oral cavity with a general optical camera, it is difficult to segment individual tooth due to common obstacles such as weak edges, intensity inhomogeneities and strong light reflections. In this work, we propose a method for segmenting individual tooth in human teeth images. The key to this method is to obtain pseudo edge-region using </span>deep neural networks. After an additional step to obtain </span>initial contours<span><span> for each tooth region, the individual tooth is segmented by applying active contour models. We also present a strategy using existing model-based methods for labeling the data required for </span>neural network training.</span></p></div>","PeriodicalId":49521,"journal":{"name":"Signal Processing-Image Communication","volume":"120 ","pages":"Article 117076"},"PeriodicalIF":3.5,"publicationDate":"2023-10-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"91987221","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Are metrics measuring what they should? An evaluation of Image Captioning task metrics
IF 3.5 | CAS Tier 3 (Engineering & Technology) | Q2 ENGINEERING, ELECTRICAL & ELECTRONIC | Pub Date: 2023-10-14 | DOI: 10.1016/j.image.2023.117071
Othón González-Chávez, Guillermo Ruiz, Daniela Moctezuma, Tania Ramirez-delReal

Image Captioning is a current research task that describes the image content using the objects and their relationships in the scene. Two important research areas converge to tackle this task: artificial vision and natural language processing. In Image Captioning, as in any computational intelligence task, the performance metrics are crucial for knowing how well (or how poorly) a method performs. In recent years, it has been observed that classical metrics based on n-grams are insufficient to capture the semantics and the critical meaning needed to describe the content of an image. To measure how well current and more recent metrics are doing, in this article we present an evaluation of several kinds of Image Captioning metrics and a comparison between them using the well-known datasets MS-COCO and Flickr8k. The metrics were selected from those most used in prior works: metrics based on n-grams, such as BLEU, SacreBLEU, METEOR, ROUGE-L, CIDEr, and SPICE, and metrics based on embeddings, such as BERTScore and CLIPScore. We designed two scenarios: (1) a set of artificially built captions of varying quality and (2) a comparison of some state-of-the-art Image Captioning methods. Interesting findings emerged while trying to answer the questions: Are the current metrics helping to produce high-quality captions? How do the metrics compare to each other? What are the metrics really measuring?
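
As a small example of the n-gram family of metrics examined here, NLTK's sentence-level BLEU can score two hypothetical captions against one reference; the captions and the smoothing choice below are made up for illustration and are not from the paper's experiments.

```python
# pip install nltk
from nltk.translate.bleu_score import sentence_bleu, SmoothingFunction

reference = "a brown dog is running across the wet grass".split()
good_caption = "a brown dog runs across the wet grass".split()
bad_caption = "there is an animal outside".split()

smooth = SmoothingFunction().method1  # avoids zero scores on short sentences
print(sentence_bleu([reference], good_caption, smoothing_function=smooth))
print(sentence_bleu([reference], bad_caption, smoothing_function=smooth))
```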

{"title":"Are metrics measuring what they should? An evaluation of Image Captioning task metrics","authors":"Othón González-Chávez ,&nbsp;Guillermo Ruiz ,&nbsp;Daniela Moctezuma ,&nbsp;Tania Ramirez-delReal","doi":"10.1016/j.image.2023.117071","DOIUrl":"https://doi.org/10.1016/j.image.2023.117071","url":null,"abstract":"<div><p><span>Image Captioning is a current research task to describe the image content using the objects and their relationships in the scene. Two important research areas converge to tackle this task: artificial vision and natural language processing. In Image Captioning, as in any computational intelligence task, the performance metrics are crucial for knowing how well (or bad) a method performs. In recent years, it has been observed that classical metrics based on </span><span><math><mi>n</mi></math></span>-grams are insufficient to capture the semantics and the critical meaning to describe the content in an image. Looking to measure how well or not the current and more recent metrics are doing, in this article, we present an evaluation of several kinds of Image Captioning metrics and a comparison between them using the well-known datasets, MS-COCO and Flickr8k. The metrics were selected from the most used in prior works; they are those based on <span><math><mi>n</mi></math></span>-grams, such as BLEU, SacreBLEU, METEOR, ROGUE-L, CIDEr, SPICE, and those based on embeddings, such as BERTScore and CLIPScore. We designed two scenarios for this: (1) a set of artificially built captions with several qualities and (2) a comparison of some state-of-the-art Image Captioning methods. Interesting findings were found trying to answer the questions: Are the current metrics helping to produce high-quality captions? How do actual metrics compare to each other? What are the metrics <em>really</em> measuring?</p></div>","PeriodicalId":49521,"journal":{"name":"Signal Processing-Image Communication","volume":"120 ","pages":"Article 117071"},"PeriodicalIF":3.5,"publicationDate":"2023-10-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"49833433","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
A transformer-based network for perceptual contrastive underwater image enhancement
IF 3.5 | CAS Tier 3 (Engineering & Technology) | Q2 ENGINEERING, ELECTRICAL & ELECTRONIC | Pub Date: 2023-10-01 | DOI: 10.1016/j.image.2023.117032
Na Cheng, Zhixuan Sun, Xuanbing Zhu, Hongyu Wang

Vision-based underwater image enhancement methods have received much attention for applications in marine engineering and marine science. The absorption and scattering of light in real underwater scenes lead to severe information degradation in the acquired underwater images, limiting further development of underwater tasks. To solve these problems, a novel transformer-based perceptual contrastive network for underwater image enhancement (TPC-UIE) is proposed to produce visually friendly, high-quality images, where contrastive learning is applied to the underwater image enhancement (UIE) task for the first time. Specifically, to address the limitations of purely convolution-based networks, we embed a transformer into the UIE network to improve its ability to capture global dependencies. The limits of the transformer are then taken into account by reintroducing convolution to better capture local attention. At the same time, a dual-attention module strengthens the network's focus on the spatial and color channels that are more severely attenuated. Finally, a perceptual contrastive regularization method is proposed, in which a multi-loss function made up of reconstruction, perceptual, and contrastive losses jointly optimizes the model to simultaneously ensure texture detail, contrast, and color consistency. Experimental results on several existing datasets show that TPC-UIE obtains excellent performance in both subjective and objective evaluations compared to other methods. In addition, the enhancement significantly improves the visual quality of underwater images and effectively facilitates further development of downstream underwater tasks.
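
A minimal sketch of the multi-loss idea, combining reconstruction, perceptual, and contrastive terms on shared VGG features, is given below. The loss weights, the feature layer cut, the ratio-style contrastive formulation, and the use of an un-pretrained VGG (to keep the example download-free) are assumptions; this is not TPC-UIE's exact regularization.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F
from torchvision.models import vgg16  # assumes torchvision >= 0.13 for the weights= argument

class PerceptualContrastiveLoss(nn.Module):
    """Reconstruction + perceptual + contrastive terms computed on frozen VGG features."""
    def __init__(self, w_rec=1.0, w_per=0.1, w_con=0.1):
        super().__init__()
        # weights=None keeps the sketch download-free; in practice pretrained weights are used.
        self.features = vgg16(weights=None).features[:16].eval()
        for p in self.features.parameters():
            p.requires_grad_(False)
        self.w = (w_rec, w_per, w_con)

    def forward(self, enhanced, clean, degraded):
        f_e, f_c, f_d = map(self.features, (enhanced, clean, degraded))
        rec = F.l1_loss(enhanced, clean)   # pixel-level reconstruction
        per = F.l1_loss(f_e, f_c)          # perceptual similarity to the clean reference
        # Contrastive term: close to the clean "positive", far from the degraded "negative".
        con = F.l1_loss(f_e, f_c) / (F.l1_loss(f_e, f_d) + 1e-6)
        w_rec, w_per, w_con = self.w
        return w_rec * rec + w_per * per + w_con * con

if __name__ == "__main__":
    loss_fn = PerceptualContrastiveLoss()
    x = torch.rand(1, 3, 64, 64)
    print(loss_fn(x, torch.rand_like(x), torch.rand_like(x)).item())
```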

{"title":"A transformer-based network for perceptual contrastive underwater image enhancement","authors":"Na Cheng,&nbsp;Zhixuan Sun,&nbsp;Xuanbing Zhu,&nbsp;Hongyu Wang","doi":"10.1016/j.image.2023.117032","DOIUrl":"https://doi.org/10.1016/j.image.2023.117032","url":null,"abstract":"<div><p>Vision-based underwater image enhancement methods have received much attention for application in the fields of marine engineering and marine science. The absorption and scattering of light in real underwater scenes leads to severe information degradation in the acquired underwater images, thus limiting further development of underwater tasks. To solve these problems, a novel transformer-based perceptual contrastive network for underwater image enhancement methods (TPC-UIE) is proposed to achieve visually friendly and high-quality images, where contrastive learning<span> is applied to the underwater image enhancement (UIE) task for the first time. Specifically, to address the limitations of the pure convolution-based network, we embed the transformer into the UIE network to improve its ability to capture global dependencies. Then, the limits of the transformer are then taken into account as convolution is reintroduced to better capture local attention. At the same time, the dual-attention module strengthens the network’s focus on the spatial and color channels that are more severely attenuated. Finally, a perceptual contrastive regularization method is proposed, where a multi-loss function made up of reconstruction loss, perceptual loss, and contrastive loss jointly optimizes the model to simultaneously ensure texture detail, contrast, and color consistency. Experimental results on several existing datasets show that the TPC-UIE obtains excellent performance in both subjective and objective evaluations compared to other methods. In addition, the visual quality of the underwater images is significantly improved by the enhancement of the method and effectively facilitates further development of the underwater task.</span></p></div>","PeriodicalId":49521,"journal":{"name":"Signal Processing-Image Communication","volume":"118 ","pages":"Article 117032"},"PeriodicalIF":3.5,"publicationDate":"2023-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"49896211","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
No-reference blurred image quality assessment method based on structure of structure features
IF 3.5 | CAS Tier 3 (Engineering & Technology) | Q2 ENGINEERING, ELECTRICAL & ELECTRONIC | Pub Date: 2023-10-01 | DOI: 10.1016/j.image.2023.117008
Jian Chen, Shiyun Li, Li Lin, Jiaze Wan, Zuoyong Li

The deep structure of an image carries information that is helpful for perceiving image quality. Inspired by the deep-level image features extracted by deep learning methods, we propose a no-reference blurred image quality evaluation model based on the structure of structure features. In the spatial domain, novel weighted local binary patterns are proposed that leverage maximum local variation maps to extract structural features from multi-resolution images. In the spectral domain, gradient information of multi-scale Log-Gabor filtered images is extracted as the structure of structure features and combined with entropy features. The features extracted from both domains are then fused to form a quality perception feature vector and mapped to a quality score via support vector regression (SVR). Experiments are conducted to evaluate the performance of the proposed method on various IQA databases, including LIVE, CSIQ, TID2008, TID2013, CID2013, CLIVE, and BID. The experimental results show that, compared with some state-of-the-art methods, our method achieves better evaluation results and is more in line with the human visual system. The source code will be released at https://github.com/JamesC0321/s2s_features/.
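
A toy version of the features-plus-SVR pipeline, an LBP histogram and an entropy value regressed to a quality score with scikit-learn, is sketched below. The maximum-local-variation weighting and Log-Gabor features of the paper are not reproduced, and the random stand-in images and scores are placeholders, not real IQA data.

```python
import numpy as np
from skimage.feature import local_binary_pattern
from skimage.measure import shannon_entropy
from sklearn.svm import SVR

def blur_features(gray, radius=1, n_points=8):
    """Hand-crafted features: uniform-LBP histogram plus global image entropy."""
    lbp = local_binary_pattern(gray, n_points, radius, method="uniform")
    hist, _ = np.histogram(lbp, bins=n_points + 2, range=(0, n_points + 2), density=True)
    return np.append(hist, shannon_entropy(gray))

rng = np.random.default_rng(0)
images = [rng.random((64, 64)) for _ in range(20)]   # stand-in grayscale images
scores = rng.random(20)                              # stand-in subjective quality scores
X = np.stack([blur_features(im) for im in images])

regressor = SVR(kernel="rbf", C=1.0)                 # maps feature vectors to quality scores
regressor.fit(X, scores)
print(regressor.predict(X[:3]))
```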

{"title":"No-reference blurred image quality assessment method based on structure of structure features","authors":"Jian Chen ,&nbsp;Shiyun Li ,&nbsp;Li Lin ,&nbsp;Jiaze Wan ,&nbsp;Zuoyong Li","doi":"10.1016/j.image.2023.117008","DOIUrl":"https://doi.org/10.1016/j.image.2023.117008","url":null,"abstract":"<div><p><span><span><span><span><span><span>The deep structure in the image contains certain information of the image, which is helpful to perceive the quality of the image. Inspired by deep level image features extracted via </span>deep learning<span> methods, we propose a no-reference blurred image quality evaluation model based on the structure of structure features. In spatial domain, the novel weighted local binary patterns are proposed which leverage maximum local variation maps to extract structural features from multi-resolution images. In </span></span>spectral domain, </span>gradient information<span> of multi-scale Log-Gabor filtered images is extracted as the structure of structure features, and combined with entropy features. Then, the features extracted from both domains are fused to form a quality perception feature vector and mapped into the quality score via support vector regression (SVR). Experiments are conducted to evaluate the performance of the proposed method on various </span></span>IQA databases, including the LIVE, CSIQ, TID2008, TID2013, CID2013, CLIVE, and BID. The experimental results show that compared with some state-of-the-art methods, our proposed method achieves better evaluation results and is more in line with the </span>human visual system<span>. The source code will be released at </span></span><span>https://github.com/JamesC0321/s2s_features/</span><svg><path></path></svg>.</p></div>","PeriodicalId":49521,"journal":{"name":"Signal Processing-Image Communication","volume":"118 ","pages":"Article 117008"},"PeriodicalIF":3.5,"publicationDate":"2023-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"49844964","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Magnifying multimodal forgery clues for Deepfake detection
IF 3.5 | CAS Tier 3 (Engineering & Technology) | Q2 ENGINEERING, ELECTRICAL & ELECTRONIC | Pub Date: 2023-10-01 | DOI: 10.1016/j.image.2023.117010
Xiaolong Liu, Yang Yu, Xiaolong Li, Yao Zhao

Advancements in computer vision and deep learning have made generated Deepfake media difficult to distinguish from real media. In addition, recent forgery techniques also modify the audio information based on the forged video, which brings new challenges. However, due to cross-modal bias, recent multimodal detection methods do not fully explore intra-modal and cross-modal forgery clues, which limits detection performance. In this paper, we propose a novel audio-visual aware multimodal Deepfake detection framework that magnifies intra-modal and cross-modal forgery clues. First, to capture temporal intra-modal defects, a Forgery Clues Magnification Transformer (FCMT) module is proposed to magnify forgery clues based on sequence-level relationships. Then, a Distribution Difference based Inconsistency Computing (DDIC) module built on the Jensen–Shannon divergence is designed to adaptively align multimodal information and further magnify cross-modal inconsistency. Next, we explore spatial artifacts by connecting multi-scale feature representations to provide comprehensive information. Finally, a feature fusion module is designed to adaptively fuse features into a more discriminative representation. Experiments demonstrate that the proposed framework outperforms independently trained models and, at the same time, yields superior generalization on unseen types of Deepfakes.
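
The Jensen–Shannon-based inconsistency idea can be illustrated by comparing the feature distributions of the two modalities. This is a simplified stand-in for the DDIC module: the histogram binning and the Gaussian stand-in features are assumptions, not the paper's learned representations.

```python
import numpy as np
from scipy.spatial.distance import jensenshannon

def cross_modal_inconsistency(audio_feat, visual_feat, bins=32):
    """Jensen-Shannon distance between the two modalities' feature histograms."""
    lo = min(audio_feat.min(), visual_feat.min())
    hi = max(audio_feat.max(), visual_feat.max())
    p, _ = np.histogram(audio_feat, bins=bins, range=(lo, hi), density=True)
    q, _ = np.histogram(visual_feat, bins=bins, range=(lo, hi), density=True)
    return jensenshannon(p, q)  # 0 = identical distributions; larger = more inconsistent

rng = np.random.default_rng(0)
real_pair = (rng.normal(0, 1, 1024), rng.normal(0, 1, 1024))       # aligned modalities
fake_pair = (rng.normal(0, 1, 1024), rng.normal(1.5, 2.0, 1024))   # mismatched modalities
print(cross_modal_inconsistency(*real_pair))
print(cross_modal_inconsistency(*fake_pair))
```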

{"title":"Magnifying multimodal forgery clues for Deepfake detection","authors":"Xiaolong Liu,&nbsp;Yang Yu,&nbsp;Xiaolong Li,&nbsp;Yao Zhao","doi":"10.1016/j.image.2023.117010","DOIUrl":"https://doi.org/10.1016/j.image.2023.117010","url":null,"abstract":"<div><p><span>Advancements in computer vision<span><span> and deep learning have led to difficulty in distinguishing the generated Deepfake media. In addition, recent forgery techniques also modify the audio information based on the forged video, which brings new challenges. However, due to the cross-modal bias, recent multimodal detection methods do not well explore the intra-modal and cross-modal forgery clues, which leads to limited detection performance. In this paper, we propose a novel audio-visual aware multimodal Deepfake detection framework to magnify intra-modal and cross-modal forgery clues. Firstly, to capture temporal intra-modal defects, Forgery Clues Magnification Transformer (FCMT) module is proposed to magnify forgery clues based on sequence-level relationships. Then, the Distribution Difference based Inconsistency Computing (DDIC) module based on Jensen–Shannon divergence is designed to adaptively align </span>multimodal information for further magnifying the cross-modal inconsistency. Next, we further explore spatial artifacts by connecting multi-scale feature representation to provide comprehensive information. Finally, a </span></span>feature fusion<span> module is designed to adaptively fuse features to generate a more discriminative feature. Experiments demonstrate that the proposed framework outperforms independently trained models, and at the same time, yields superior generalization capability on unseen types of Deepfake.</span></p></div>","PeriodicalId":49521,"journal":{"name":"Signal Processing-Image Communication","volume":"118 ","pages":"Article 117010"},"PeriodicalIF":3.5,"publicationDate":"2023-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"49881552","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Multi-scale graph neural network for global stereo matching
IF 3.5 | CAS Tier 3 (Engineering & Technology) | Q2 ENGINEERING, ELECTRICAL & ELECTRONIC | Pub Date: 2023-10-01 | DOI: 10.1016/j.image.2023.117026
Xiaofeng Wang, Jun Yu, Zhiheng Sun, Jiameng Sun, Yingying Su

Currently, deep learning-based stereo matching relies solely on local convolutional networks, which lack enough global information for accurate disparity estimation. Motivated by the excellent global representation capability of graphs, a novel Multi-scale Graph Neural Network (MGNN) is proposed to fundamentally improve stereo matching from the global perspective. First, we construct a multi-scale graph structure in which multi-scale nodes carrying projected multi-scale image features can be directly linked by inner-scale and cross-scale edges, instead of relying solely on local convolutions as in existing deep learning-based stereo matching. To enhance spatial position information in the non-Euclidean multi-scale graph space, we further propose a multi-scale position embedding that embeds the potential position features of Euclidean space into the projected multi-scale image features. Second, we propose multi-scale graph feature inference to extract global context information over the multi-scale graph structure. Features can thus be globally inferred not only on each scale but also interactively across different scales, comprehensively considering global context information with multi-scale receptive fields. Finally, MGNN is deployed for dense stereo matching, and experiments demonstrate that our method achieves state-of-the-art performance on Scene Flow, KITTI 2012/2015, and Middlebury Stereo Evaluation v.3/2021.
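
One round of message passing over a toy two-scale graph, with inner-scale and cross-scale edges encoded in a single adjacency matrix, can be sketched as follows. Node counts, the feature dimension, and the mean-aggregation update are assumptions for illustration, not MGNN's actual inference scheme.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class CrossScaleMessagePassing(nn.Module):
    """One round of message passing over nodes drawn from two scales of a feature pyramid."""
    def __init__(self, dim=32):
        super().__init__()
        self.update = nn.Linear(2 * dim, dim)

    def forward(self, feats, adj):
        # feats: (N, dim) node features from all scales; adj: (N, N) 0/1 adjacency
        # encoding both inner-scale and cross-scale edges.
        deg = adj.sum(dim=1, keepdim=True).clamp(min=1)
        messages = adj @ feats / deg  # mean over neighboring nodes
        return F.relu(self.update(torch.cat([feats, messages], dim=-1)))

# Toy graph: 4 coarse-scale nodes + 16 fine-scale nodes.
coarse, fine, dim = 4, 16, 32
feats = torch.randn(coarse + fine, dim)
adj = torch.zeros(coarse + fine, coarse + fine)
adj[:coarse, :coarse] = 1                       # inner-scale edges (coarse level)
adj[coarse:, coarse:] = 1                       # inner-scale edges (fine level)
for i in range(fine):                           # cross-scale edges: each fine node
    parent = i // (fine // coarse)              # links to its parent coarse node
    adj[coarse + i, parent] = 1
    adj[parent, coarse + i] = 1

print(CrossScaleMessagePassing(dim)(feats, adj).shape)  # torch.Size([20, 32])
```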

{"title":"Multi-scale graph neural network for global stereo matching","authors":"Xiaofeng Wang ,&nbsp;Jun Yu ,&nbsp;Zhiheng Sun ,&nbsp;Jiameng Sun ,&nbsp;Yingying Su","doi":"10.1016/j.image.2023.117026","DOIUrl":"https://doi.org/10.1016/j.image.2023.117026","url":null,"abstract":"<div><p>Currently, deep learning-based stereo matching<span><span> is solely based on local convolution networks, which lack enough global information for accurate disparity estimation. Motivated by the excellent global representation of the graph, a novel Multi-scale </span>Graph Neural Network<span><span> (MGNN) is proposed to essentially improve stereo matching from the global aspect. Firstly, we construct the multi-scale graph structure, where the multi-scale nodes with projected multi-scale image features<span> can be directly linked by the inner-scale and cross-scale edges, instead of solely relying on local convolutions for deep learning-based stereo matching. To enhance the spatial position information at non-Euclidean multi-scale graph space, we further propose a multi-scale </span></span>position embedding to embed the potential position features of Euclidean space into projected multi-scale image features. Secondly, we propose the multi-scale graph feature inference to extract global context information on multi-scale graph structure. Thus, the features not only be globally inferred on each scale, but also can be interactively inferred across different scales to comprehensively consider global context information with multi-scale receptive fields. Finally, MGNN is deployed into dense stereo matching and experiments demonstrate that our method achieves state-of-the-art performance on Scene Flow, KITTI 2012/2015, and Middlebury Stereo Evaluation v.3/2021.</span></span></p></div>","PeriodicalId":49521,"journal":{"name":"Signal Processing-Image Communication","volume":"118 ","pages":"Article 117026"},"PeriodicalIF":3.5,"publicationDate":"2023-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"49844965","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Enhancing transferability of adversarial examples with pixel-level scale variation
IF 3.5 | CAS Tier 3 (Engineering & Technology) | Q2 ENGINEERING, ELECTRICAL & ELECTRONIC | Pub Date: 2023-10-01 | DOI: 10.1016/j.image.2023.117020
Zhongshu Mao, Yiqin Lu, Zhe Cheng, Xiong Shen

The transferability of adversarial examples under the black-box attack setting has attracted extensive attention from the community. Among recently proposed methods, input transformation is one of the most effective approaches to improving transferability. However, existing methods either improve transferability only slightly or are not robust against defense models. We delve into the generation process of adversarial examples and find that existing input transformation methods tend to craft adversarial examples by transforming the entire image, which we term image-level transformations. This naturally motivates us to perform pixel-level transformations, i.e., transforming only part of the pixels of the image. Experimental results show that pixel-level transformations can considerably enhance the transferability of adversarial examples while remaining robust against defense models. We believe that pixel-level transformations are more fine-grained than image-level transformations and can thus achieve better performance. Based on this finding, we propose the pixel-level scale variation (PSV) method to further improve the transferability of adversarial examples. The proposed PSV randomly samples a set of scaled mask matrices and transforms part of the pixels of the input image with these matrices to increase pixel-level diversity. Empirical evaluations on the standard ImageNet dataset demonstrate the effectiveness and superior performance of the proposed PSV on both normally trained models (with the highest average attack success rate of 79.2%) and defense models (with the highest average attack success rate of 61.4%). Our method can further improve transferability (with the highest average attack success rate of 88.2%) when combined with other input transformation methods.
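
A possible reading of the pixel-level scale variation transform, scaling a random subset of pixels by 1/2^i while leaving the rest untouched, is sketched below. The keep probability, the number of copies, and the scale sampling are assumptions about details the abstract does not specify.

```python
import torch

def pixel_level_scale_variation(images, num_copies=5, keep_prob=0.7, max_power=5):
    """Return several copies of the batch, each with a random subset of pixels
    scaled by a random factor 1 / 2**i while the remaining pixels are unchanged."""
    variants = []
    for _ in range(num_copies):
        mask = (torch.rand_like(images) < keep_prob).float()  # pixels kept as-is
        i = torch.randint(1, max_power + 1, (1,)).item()
        scaled = images / (2 ** i)                             # pixel-level scale change
        variants.append(mask * images + (1 - mask) * scaled)
    return torch.stack(variants)                               # (num_copies, B, C, H, W)

if __name__ == "__main__":
    batch = torch.rand(2, 3, 32, 32)
    print(pixel_level_scale_variation(batch).shape)  # torch.Size([5, 2, 3, 32, 32])
```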

{"title":"Enhancing transferability of adversarial examples with pixel-level scale variation","authors":"Zhongshu Mao ,&nbsp;Yiqin Lu ,&nbsp;Zhe Cheng ,&nbsp;Xiong Shen","doi":"10.1016/j.image.2023.117020","DOIUrl":"https://doi.org/10.1016/j.image.2023.117020","url":null,"abstract":"<div><p>The transferability of adversarial examples under the black-box attack setting has attracted extensive attention from the community. Input transformation is one of the most effective approaches to improve the transferability among all methods proposed recently. However, existing methods either only slightly improve transferability or are not robust to defense models. We delve into the generation process of adversarial examples and find that existing input transformation methods tend to craft adversarial examples by transforming the entire image, which we term image-level transformations. This naturally motivates us to perform pixel-level transformations, i.e., transforming only part pixels of the image. Experimental results show that pixel-level transformations can considerably enhance the transferability of the adversarial examples while still being robust to defense models. We believe that pixel-level transformations are more fine-grained than image-level transformations, and thus can achieve better performance. Based on this finding, we propose the pixel-level scale variation (PSV) method to further improve the transferability of adversarial examples. The proposed PSV randomly samples a set of scaled mask matrices and transforms the part pixels of the input image with the matrices to increase the pixel-level diversity. Empirical evaluations on the standard ImageNet dataset demonstrate the effectiveness and superior performance of the proposed PSV both on the normally trained (with the highest average attack success rate of 79.2%) and defense models (with the highest average attack success rate of 61.4%). Our method can further improve transferability (with the highest average attack success rate of 88.2%) by combining it with other input transformation methods.</p></div>","PeriodicalId":49521,"journal":{"name":"Signal Processing-Image Communication","volume":"118 ","pages":"Article 117020"},"PeriodicalIF":3.5,"publicationDate":"2023-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"49844961","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0