
Signal Processing-Image Communication: Latest Articles

Learned fractional downsampling network for adaptive video streaming
IF 3.4 CAS Tier 3 (Engineering & Technology) Q2 ENGINEERING, ELECTRICAL & ELECTRONIC Pub Date: 2024-07-22 DOI: 10.1016/j.image.2024.117172
Li-Heng Chen, Christos G. Bampis, Zhi Li, Joel Sole, Chao Chen, Alan C. Bovik

Given increasing demand for very large format contents and displays, spatial resolution changes have become an important part of video streaming. In particular, video downscaling is a key ingredient that streaming providers implement in their encoding pipeline as part of video quality optimization workflows. Here, we propose a downsampling network architecture that progressively reconstructs residuals at different scales. Since the layers of convolutional neural networks (CNNs) can only be used to alter the resolutions of their inputs by integer scale factors, we seek new ways to achieve fractional scaling, which is crucial in many video processing applications. More concretely, we utilize an alternative building block, formulated as a conventional convolutional layer followed by a differentiable resizer. To validate the efficacy of our proposed downsampling network, we integrated it into a modern video encoding system for adaptive streaming. We extensively evaluated our method using a variety of different video codecs and upsampling algorithms to show its generality. The experimental results show that improvements in coding efficiency over the conventional Lanczos algorithm and state-of-the-art methods are attained, in terms of PSNR, SSIM, and VMAF, when tested on high-resolution test videos. In addition to quantitative experiments, we also carried out a subjective quality study, validating that the proposed downsampling model yields favorable results.
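
The building block described in this abstract, a conventional convolutional layer followed by a differentiable resizer, can be illustrated with a minimal PyTorch sketch. The layer sizes, the bilinear resizer, and the module name FracDownBlock below are assumptions for illustration, not the authors' implementation.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class FracDownBlock(nn.Module):
        """Hypothetical sketch: a conventional conv layer followed by a
        differentiable resizer, so the block can change resolution by a
        fractional factor (e.g., 1080p -> 720p) while staying trainable."""
        def __init__(self, in_ch, out_ch, scale):
            super().__init__()
            self.conv = nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1)
            self.scale = scale                      # fractional factor, e.g. 2/3

        def forward(self, x):
            x = F.relu(self.conv(x))
            h, w = x.shape[-2:]
            target = (round(h * self.scale), round(w * self.scale))
            # bilinear interpolation is differentiable, so gradients flow
            # through the resizer during end-to-end training
            return F.interpolate(x, size=target, mode='bilinear', align_corners=False)

    # example: downscale a 1080p frame by a factor of 2/3 to 720p
    frame = torch.randn(1, 3, 1080, 1920)
    print(FracDownBlock(3, 3, scale=2/3)(frame).shape)   # torch.Size([1, 3, 720, 1280])

Stacking several such blocks, each reconstructing a residual at its own scale, would give the progressive fractional downscaling the abstract describes.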

Citations: 0
A modified hue and range preserving color assignment function with a component-wise saturation adjustment for color image enhancement
IF 3.4 CAS Tier 3 (Engineering & Technology) Q2 ENGINEERING, ELECTRICAL & ELECTRONIC Pub Date: 2024-07-19 DOI: 10.1016/j.image.2024.117174
Sepideh Khormaeipour, Fatemeh Shakeri

This paper presents a new approach to enhancing color images by modifying an affine color assignment function. This function maps colors to pixels of the enhanced gray-scale image in a way that improves the visual quality of the image, particularly in darker regions. The main goal of our method is to finely adjust saturation, correct saturation loss in specific image regions, and preserve the original image’s range and hue. Our proposed method follows a two-step process. First, it enhances the intensity image using a combination of global and local histogram equalization methods. This results in an overall improved appearance by redistributing pixel intensities and enhancing contrast. Then, modified color mapping functions are applied to assign colors to each pixel of the enhanced gray-scale image. The aim is to adjust saturation by amplifying the maximally saturated color image. Additionally, we introduce two new color-weighted maps to evaluate pixel importance from the maximally saturated image. This contributes to saturation control in the final enhanced image. Compared to alternative color mapping algorithms, our model preserves the original color of pixels in challenging areas and fine-tunes saturation based on parameter settings.
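
An affine, hue-preserving color assignment of the kind discussed in this abstract is often written as re-attaching each pixel's chromatic part to the enhanced intensity. The NumPy sketch below shows that general idea only; the clipping strategy, the sat_gain parameter, and the function name are illustrative assumptions, not the authors' exact mapping.

    import numpy as np

    def assign_colors(rgb, gray, gray_enhanced, sat_gain=1.0):
        """Hypothetical sketch of an affine, hue-preserving color assignment:
        the chromatic part (c - x) of each pixel is added back onto the
        enhanced intensity y, scaled by sat_gain to adjust saturation."""
        x = gray[..., None]             # original intensity map, H x W x 1
        y = gray_enhanced[..., None]    # enhanced intensity map, H x W x 1
        out = y + sat_gain * (rgb - x)  # equal scaling of all channels keeps hue
        return np.clip(out, 0.0, 1.0)   # crude range preservation by clipping

    # usage sketch: rgb in [0, 1], gray = rgb.mean(axis=-1),
    # gray_enhanced produced by global/local histogram equalization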

Citations: 0
MGFA : A multi-scale global feature autoencoder to fuse infrared and visible images
IF 3.4 CAS Tier 3 (Engineering & Technology) Q2 ENGINEERING, ELECTRICAL & ELECTRONIC Pub Date: 2024-07-14 DOI: 10.1016/j.image.2024.117168
Xiaoxuan Chen, Shuwen Xu, Shaohai Hu, Xiaole Ma

The convolutional operation pays too much attention to local information, resulting in the loss of global information and a decline in fusion quality. In order to ensure that the fused image fully captures the features of the entire scene, an end-to-end Multi-scale Global Feature Autoencoder (MGFA) is proposed in this paper, which can generate fused images with both global and local information. In this network, a multi-scale global feature extraction module is proposed, which combines dilated convolutional modules with the Global Context Block (GCBlock) to extract the global features ignored by the convolutional operation. In addition, an adaptive embedded residual fusion module is proposed to fuse different frequency components in the source images with the idea of embedded residual learning. This can enrich the detailed texture of the fused results. Extensive qualitative and quantitative experiments have demonstrated that the proposed method can achieve excellent results in retaining global information and improving visual effects. Furthermore, the fused images obtained in this paper are more adapted to the object detection task and can assist in improving the precision of detection.
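
The multi-scale global feature extraction idea above (parallel dilated convolutions combined with a global-context attention block) can be sketched in PyTorch as follows. The dilation rates, channel sizes, and the simplified GCBlock are assumptions for illustration rather than the paper's exact module.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class SimpleGCBlock(nn.Module):
        """Simplified global context block sketch: a softmax-pooled global
        feature is transformed and added back to every spatial position."""
        def __init__(self, ch):
            super().__init__()
            self.mask = nn.Conv2d(ch, 1, 1)
            self.transform = nn.Sequential(nn.Conv2d(ch, ch, 1), nn.ReLU(),
                                           nn.Conv2d(ch, ch, 1))

        def forward(self, x):
            b, c, h, w = x.shape
            attn = torch.softmax(self.mask(x).view(b, 1, h * w), dim=-1)  # spatial attention
            ctx = torch.bmm(x.view(b, c, h * w), attn.transpose(1, 2))    # b x c x 1
            ctx = self.transform(ctx.view(b, c, 1, 1))
            return x + ctx                                                # broadcast global context

    class MultiScaleGlobalExtractor(nn.Module):
        """Hypothetical sketch: parallel dilated convs capture multi-scale
        context, then the global context block injects scene-level information."""
        def __init__(self, ch, rates=(1, 2, 4)):
            super().__init__()
            self.branches = nn.ModuleList(
                [nn.Conv2d(ch, ch, 3, padding=r, dilation=r) for r in rates])
            self.fuse = nn.Conv2d(ch * len(rates), ch, 1)
            self.gc = SimpleGCBlock(ch)

        def forward(self, x):
            feats = [F.relu(b(x)) for b in self.branches]
            return self.gc(self.fuse(torch.cat(feats, dim=1)))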

Citations: 0
USteg-DSE: Universal quantitative Steganalysis framework using Densenet merged with Squeeze & Excitation net
IF 3.4 CAS Tier 3 (Engineering & Technology) Q2 ENGINEERING, ELECTRICAL & ELECTRONIC Pub Date: 2024-07-11 DOI: 10.1016/j.image.2024.117171
Anuradha Singhal, Punam Bedi

Carrying concealed communication via media is termed steganography, and unraveling the details of such covert transmission is known as steganalysis. Extracting details of a hidden message, such as its length, position, and embedding algorithm, forms part of forensic steganalysis. Predicting the length of the payload in a camouflaged interchange is termed quantitative steganalysis and is an indispensable tool for forensic investigators. When the payload length is estimated without any prior knowledge of the cover media or the steganography algorithm used, it is termed universal quantitative steganalysis.

Most existing quantitative steganalysis frameworks in the literature work only for a specific embedding algorithm or are domain specific. In this paper we propose USteg-DSE, a deep learning framework for universal quantitative image steganalysis using DenseNet with a Squeeze & Excitation module (SEM). In deep learning, deeper networks capture complex statistical properties more easily, but as depth increases they suffer from the vanishing gradient problem. In classic architectures, all channels are equally weighted to produce feature maps. The presented USteg-DSE framework overcomes these problems by using DenseNet and SEM. In DenseNet, each layer is directly connected to every other layer, which makes information and gradient flow easier with fewer feature maps. SEM incorporates a content-aware mechanism to adaptively regulate the weight of every feature map. The presented framework has been compared with existing state-of-the-art techniques in both the spatial and transform domains, and shows better results in terms of Mean Absolute Error (MAE) and Mean Squared Error (MSE).
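
The Squeeze-and-Excitation module that the framework relies on can be written in a few lines of PyTorch; the reduction ratio and layer choices below are generic assumptions from the standard SE design, not the USteg-DSE configuration.

    import torch
    import torch.nn as nn

    class SqueezeExcitation(nn.Module):
        """Standard SE module sketch: global average pooling ("squeeze")
        followed by a small bottleneck MLP ("excitation") that re-weights
        each feature map in a content-aware way."""
        def __init__(self, channels, reduction=16):
            super().__init__()
            self.fc = nn.Sequential(
                nn.Linear(channels, channels // reduction), nn.ReLU(),
                nn.Linear(channels // reduction, channels), nn.Sigmoid())

        def forward(self, x):
            b, c, _, _ = x.shape
            w = self.fc(x.mean(dim=(2, 3)))   # per-channel weights in (0, 1)
            return x * w.view(b, c, 1, 1)     # channel re-weighting

    # in a DenseNet-style backbone, such a block would follow each dense block,
    # and the final features would feed a regressor that estimates payload length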

Citations: 0
Learning sparse feature representation for blind quality assessment of night-time images
IF 3.4 CAS Tier 3 (Engineering & Technology) Q2 ENGINEERING, ELECTRICAL & ELECTRONIC Pub Date: 2024-07-11 DOI: 10.1016/j.image.2024.117167
Maryam Karimi, Mansour Nejati

Capturing Night-Time Images (NTIs) with high quality is quite challenging for consumer photography and several practical applications, so addressing the quality assessment of night-time images is urgently needed. Since no reference image is available for such images, Night-Time image Quality Assessment (NTQA) should be done blindly. Although Blind natural Image Quality Assessment (BIQA) has attracted a great deal of attention for a long time, very little work has been done in the field of NTQA. Due to the capturing conditions, NTIs suffer from various complex authentic distortions that make them a challenging subject of research. Previous BIQA methods therefore do not correlate sufficiently with subjective scores on NTIs, and dedicated NTQA methods should be developed. In this paper we present an unsupervised feature learning method for blind quality assessment of night-time images. The features are sparse representations over data-adaptive dictionaries learned on the image exposure and gradient-magnitude maps. Given these features, an ensemble regression model trained with a least-squares gradient boosting scheme predicts objective scores that correlate highly with subjective ratings on the standard datasets.
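
The pipeline described above, sparse coding over learned dictionaries followed by boosted regression of quality scores, can be sketched with scikit-learn (a recent version is assumed). The dictionary size, sparsity level, pooling, and input variables (patches, image_patch_index, mos) are illustrative assumptions, not the authors' settings.

    import numpy as np
    from sklearn.decomposition import MiniBatchDictionaryLearning
    from sklearn.ensemble import GradientBoostingRegressor

    def fit_quality_model(patches, image_patch_index, mos):
        """Hypothetical sketch: learn a data-adaptive dictionary, sparse-code
        patches taken from exposure / gradient-magnitude maps, pool the codes
        per image, and regress subjective scores (MOS) with gradient boosting."""
        dico = MiniBatchDictionaryLearning(n_components=128,
                                           transform_algorithm='omp',
                                           transform_n_nonzero_coefs=5)
        codes = dico.fit(patches).transform(patches)            # sparse codes
        # average-pool the absolute codes of each image into one feature vector
        feats = np.stack([np.abs(codes[image_patch_index == i]).mean(axis=0)
                          for i in np.unique(image_patch_index)])
        reg = GradientBoostingRegressor(loss='squared_error')   # least-squares boosting
        return dico, reg.fit(feats, mos)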

Citations: 0
Prior-DualGAN: Rain rendering from coarse to fine
IF 3.4 CAS Tier 3 (Engineering & Technology) Q2 ENGINEERING, ELECTRICAL & ELECTRONIC Pub Date: 2024-07-11 DOI: 10.1016/j.image.2024.117170
Mingdi Hu, Jingbing Yang, Jianxun Yu, Bingyi Jing

The success of deep neural networks (DNN) in deraining has led to increased research in rain rendering. In this paper, we introduce a novel Prior-DualGAN algorithm to synthesize diverse and realistic rainy/non-rainy image pairs to improve DNN training for deraining. More precisely, the rain streak prior is first generated using essential rain streak attributes; then more realistic and diverse rain streak patterns are rendered by the first generator; finally, the second generator naturally fuses the background and generated rain streaks to produce the final rainy images. Our method has two main advantages: (1) the rain streak prior enables the network to incorporate physical prior knowledge, accelerating network convergence; (2) our dual GAN approach gradually improves the naturalness and diversity of synthesized rainy images from rain streak synthesis to rainy image synthesis. We evaluate existing deraining algorithms using our generated rain-augmented datasets Rain100L, Rain14000, and Rain-Vehicle, verifying that training with our generated rain-augmented datasets significantly improves the deraining effect. The source code will be released shortly after article’s acceptance.
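
The coarse-to-fine, two-generator structure outlined above can be sketched at a very high level in PyTorch: one generator refines a physics-based rain-streak prior into a realistic streak layer, and a second fuses that layer with the clean background. Every layer size, channel count, and the adversarial training loop omitted here are assumptions, not the paper's architecture.

    import torch
    import torch.nn as nn

    def conv_block(cin, cout):
        return nn.Sequential(nn.Conv2d(cin, cout, 3, padding=1), nn.ReLU())

    class CoarseToFineRainRenderer(nn.Module):
        """Hypothetical two-generator sketch: G1 refines the rain-streak prior,
        G2 fuses the refined streaks with the background into a rainy image."""
        def __init__(self):
            super().__init__()
            self.g1 = nn.Sequential(conv_block(1, 32), conv_block(32, 32),
                                    nn.Conv2d(32, 1, 3, padding=1))
            self.g2 = nn.Sequential(conv_block(4, 32), conv_block(32, 32),
                                    nn.Conv2d(32, 3, 3, padding=1))

        def forward(self, background, streak_prior):
            streaks = torch.relu(self.g1(streak_prior))              # refined streak layer
            rainy = self.g2(torch.cat([background, streaks], dim=1))  # natural fusion
            return rainy, streaks

    # during GAN training, discriminators would score both the streak layer and
    # the fused rainy image; that adversarial part is omitted in this sketch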

Citations: 0
LMNet: A learnable multi-scale cost volume for stereo matching
IF 3.4 CAS Tier 3 (Engineering & Technology) Q2 ENGINEERING, ELECTRICAL & ELECTRONIC Pub Date: 2024-07-11 DOI: 10.1016/j.image.2024.117169
Jiatao Liu, Yaping Zhang

Calculating disparities through stereo matching is an important step in a variety of machine vision tasks used for robotics and similar applications. The use of deep neural networks for stereo matching requires the construction of a matching cost volume. However, occluded, non-textured, and reflective regions are ill-posed and cannot be matched directly. In previous studies, matching costs have typically been computed directly from single-scale feature maps, which makes it difficult to predict disparity in ill-posed regions. Thus, we propose a learnable multi-scale matching cost calculation method (LMNet) to improve the accuracy of stereo matching. This learned matching cost can reasonably estimate the disparity of regions that are conventionally difficult to match. Because the receptive field of the convolution kernels is limited, multi-level 3D dilated convolutions over multi-scale features are introduced when constructing the cost volumes. The experimental results show that the proposed method achieves significant improvement in ill-posed regions. Compared with the classical architecture GwcNet, the End-Point-Error (EPE) of the proposed method on the Scene Flow dataset is reduced by 16.46%. The number of parameters and required calculations are also reduced by 8.71% and 20.05%, respectively. The proposed model code and pre-training parameters are available at: https://github.com/jt-liu/LMNet.
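
The key ingredient above, multi-level 3D dilated convolutions applied to the cost volume so that the matching cost is learned rather than computed directly, can be illustrated with a small PyTorch sketch; the channel counts and dilation rates are assumptions, not the LMNet configuration.

    import torch
    import torch.nn as nn

    class LearnableCostFilter(nn.Module):
        """Hypothetical sketch: parallel 3D dilated convolutions over a
        (disparity, height, width) cost volume enlarge the receptive field,
        so ill-posed regions can borrow matching evidence from farther context."""
        def __init__(self, ch, rates=(1, 2, 4)):
            super().__init__()
            self.branches = nn.ModuleList(
                [nn.Conv3d(ch, ch, 3, padding=r, dilation=r) for r in rates])
            self.fuse = nn.Conv3d(ch * len(rates), ch, 1)

        def forward(self, cost):                        # cost: B x C x D x H x W
            feats = [torch.relu(b(cost)) for b in self.branches]
            return self.fuse(torch.cat(feats, dim=1))

    # the cost volume would be built from shifted left/right feature maps and
    # filtered like this before soft-argmin disparity regression
    volume = torch.randn(1, 16, 24, 64, 128)            # B, C, D, H, W
    print(LearnableCostFilter(16)(volume).shape)         # torch.Size([1, 16, 24, 64, 128])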

Citations: 0
Multi-scale strip-shaped convolution attention network for lightweight image super-resolution
IF 3.4 CAS Tier 3 (Engineering & Technology) Q2 ENGINEERING, ELECTRICAL & ELECTRONIC Pub Date: 2024-07-11 DOI: 10.1016/j.image.2024.117166
Ke Xu, Lulu Pan, Guohua Peng, Wenbo Zhang, Yanheng Lv, Guo Li, Lingxiao Li, Le Lei

Lightweight convolutional neural networks for Single Image Super-Resolution (SISR) have exhibited remarkable performance improvements in recent years. These models achieve excellent performance by relying on attention mechanisms that incorporate square-shaped convolutions to enhance feature representation. However, these approaches still suffer from the redundancy of square-shaped convolutional kernels and overlook multi-scale information. In this paper, we propose a novel attention mechanism called Multi-scale Strip-shaped convolution Attention (MSA), which utilizes three sets of differently sized depth-wise separable strip convolution kernels in parallel to replace the redundant square-shaped convolution attention and extract multi-scale features. We also generalize MSA to other lightweight neural network models, and experimental results show that MSA outperforms other convolution-based attention mechanisms. Building upon MSA, we propose an Efficient Feature Extraction Block (EFEB), a lightweight block for SISR. Finally, based on EFEB, we propose a lightweight image super-resolution neural network named Multi-scale Strip-shaped convolution Attention Network (MSAN). Experiments demonstrate that MSAN outperforms existing state-of-the-art lightweight SR methods with fewer parameters and lower computational complexity.
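
The strip-shaped attention described above replaces a square kernel with pairs of depth-wise 1xk and kx1 convolutions at several scales. A minimal PyTorch sketch follows; the kernel sizes and the sigmoid gating form are assumptions chosen for illustration, not the MSA definition from the paper.

    import torch
    import torch.nn as nn

    class StripAttention(nn.Module):
        """Hypothetical MSA-style sketch: three parallel depth-wise strip
        convolutions (1xk then kx1) approximate large square kernels cheaply;
        their combined response gates the input as an attention map."""
        def __init__(self, ch, ks=(7, 11, 21)):
            super().__init__()
            self.branches = nn.ModuleList()
            for k in ks:
                self.branches.append(nn.Sequential(
                    nn.Conv2d(ch, ch, (1, k), padding=(0, k // 2), groups=ch),
                    nn.Conv2d(ch, ch, (k, 1), padding=(k // 2, 0), groups=ch)))
            self.proj = nn.Conv2d(ch, ch, 1)

        def forward(self, x):
            attn = sum(b(x) for b in self.branches)      # multi-scale strip response
            return x * torch.sigmoid(self.proj(attn))    # feature re-weighting

Depth-wise strip pairs cost O(2k) parameters per channel instead of O(k^2) for a square kernel, which is why they suit lightweight SR blocks.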

Citations: 0
A foreground-context dual-guided network for light-field salient object detection
IF 3.4 CAS Tier 3 (Engineering & Technology) Q2 ENGINEERING, ELECTRICAL & ELECTRONIC Pub Date: 2024-06-26 DOI: 10.1016/j.image.2024.117165
Xin Zheng, Boyang Wang, Deyang Liu, Chengtao Lv, Jiebin Yan, Ping An

Light-field salient object detection (SOD) has become an emerging trend, as light fields record comprehensive information about natural scenes that can benefit salient object detection in various ways. However, salient object detection models with light-field data as input have not been thoroughly explored. The existing methods cannot effectively suppress noise, and it is difficult to distinguish the foreground from the background under challenging conditions including self-similarity, complex backgrounds, large depth of field, and non-Lambertian scenarios. In order to extract the features of light-field images effectively and suppress the noise in the light field, in this paper we propose a foreground and context dual-guided network. Specifically, we design a global context extraction module (GCEM) and a local foreground extraction module (LFEM). GCEM is used to suppress global noise and roughly predict saliency maps. GCEM can also extract global context information from deep-level features to guide the decoding process. By extracting local information from shallow levels, LFEM refines the prediction obtained by GCEM. In addition, we use RGB images to enhance the light-field images before they are input to GCEM. Experimental results show that our proposed method is effective in suppressing global noise, achieves better results when dealing with transparent objects and complex backgrounds, and outperforms several other state-of-the-art methods on three light-field datasets.

Citations: 0
PGGNet: Pyramid gradual-guidance network for RGB-D indoor scene semantic segmentation
IF 3.4 CAS Tier 3 (Engineering & Technology) Q2 ENGINEERING, ELECTRICAL & ELECTRONIC Pub Date: 2024-06-22 DOI: 10.1016/j.image.2024.117164
Wujie Zhou, Gao Xu, Meixin Fang, Shanshan Mao, Rongwang Yang, Lu Yu

In RGB-D (red–green–blue and depth) scene semantic segmentation, depth maps provide rich spatial information to RGB images to achieve high performance. However, properly aggregating depth information and reducing noise and information loss during feature encoding after fusion are challenging aspects in scene semantic segmentation. To overcome these problems, we propose a pyramid gradual-guidance network for RGB-D indoor scene semantic segmentation. First, the quality of depth information is improved by a modality-enhancement fusion module and RGB image fusion. Then, the representation of semantic information is improved by multiscale operations. The two resulting adjacent features are used in a feature refinement module with an attention mechanism to extract semantic information. The features from adjacent modules are successively used to form an encoding pyramid, which can substantially reduce information loss and thereby ensure information integrity. Finally, we gradually integrate features at the same scale obtained from the encoding pyramid during decoding to obtain high-quality semantic segmentation. Experimental results obtained from two commonly used indoor scene datasets demonstrate that the proposed pyramid gradual-guidance network attains the highest level of performance in semantic segmentation, as compared to other existing methods.

Citations: 0