
Signal Processing-Image Communication: Latest Publications

Infrared and visible image fusion based on hybrid multi-scale decomposition and adaptive contrast enhancement
IF 3.4; CAS Tier 3 (Engineering & Technology); Q2 ENGINEERING, ELECTRICAL & ELECTRONIC; Pub Date: 2024-10-22; DOI: 10.1016/j.image.2024.117228
Yueying Luo, Kangjian He, Dan Xu, Hongzhen Shi, Wenxia Yin
Effectively fusing infrared and visible images enhances the visibility of infrared target information while capturing visual details. Balancing the brightness and contrast of the fusion image adequately has posed a significant challenge. Moreover, preserving detailed information in fusion images has been problematic. To address these issues, this paper proposes a fusion algorithm based on multi-scale decomposition and adaptive contrast enhancement. Initially, we present a hybrid multi-scale decomposition method aimed at extracting valuable information comprehensively from the source image. Subsequently, we advance an adaptive base layer optimization approach to regulate the brightness and contrast of the resultant fusion image. Lastly, we design a weight mapping rule grounded in saliency detection to integrate small-scale layers, thereby conserving the edge structure within the fusion outcome. Both qualitative and quantitative experimental results affirm the superiority of the proposed method over 11 state-of-the-art image fusion methods. Our method excels in preserving more texture and achieving higher contrast, which proves advantageous for monitoring tasks.
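As a rough illustration of the base/detail idea behind multi-scale fusion pipelines like this one, the sketch below splits each source into a smoothed base layer and a residual detail layer, averages the bases, and keeps the stronger detail. The Gaussian split and the two fusion rules are simplifying assumptions; the paper's hybrid decomposition, adaptive base-layer optimization, and saliency-based weight mapping are not reproduced here.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def two_scale_fuse(ir: np.ndarray, vis: np.ndarray, sigma: float = 5.0) -> np.ndarray:
    """Fuse grayscale infrared and visible images given as float arrays in [0, 1]."""
    # Base layers carry large-scale brightness; detail layers keep edges and texture.
    base_ir, base_vis = gaussian_filter(ir, sigma), gaussian_filter(vis, sigma)
    det_ir, det_vis = ir - base_ir, vis - base_vis
    # Toy rules: average the base layers, keep the detail with the larger magnitude.
    base_f = 0.5 * (base_ir + base_vis)
    det_f = np.where(np.abs(det_ir) >= np.abs(det_vis), det_ir, det_vis)
    return np.clip(base_f + det_f, 0.0, 1.0)

# Usage: fused = two_scale_fuse(ir_img, vis_img)  # both H x W float images
```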
Cited by: 0
Struck-out handwritten word detection and restoration for automatic descriptive answer evaluation
IF 3.4; CAS Tier 3 (Engineering & Technology); Q2 ENGINEERING, ELECTRICAL & ELECTRONIC; Pub Date: 2024-09-30; DOI: 10.1016/j.image.2024.117214
Dajian Zhong, Shivakumara Palaiahnakote, Umapada Pal, Yue Lu
Unlike objective-type evaluation, descriptive answer evaluation is challenging due to unpredictable answers and the free writing style of answers. Because of this, descriptive answer evaluation has received special attention from many researchers. Automatic answer evaluation is useful because it avoids human intervention in marking, eliminates marking bias, and, most importantly, saves a great deal of manpower. To develop an efficient and accurate system, there are several open challenges. One such open challenge is cleaning the document, which includes detecting struck-out words and restoring them. In this paper, we propose a system for struck-out handwritten word detection and restoration for automatic descriptive answer evaluation. The work has two stages. In the first stage, we explore the combination of ResNet50 and a diagonal line (principal and secondary diagonal lines) segmentation module for detecting words, and then classify struck-out words using a classification network. In the second stage, we explore the combination of U-Net as a backbone and Bi-LSTM for predicting the pixels that carry the actual text information of the struck-out words, based on the relationship between pixel sequences, for restoration. Experimental results on our dataset and standard datasets show that the proposed model performs well for struck-out word detection and restoration. A comparative study with state-of-the-art methods shows that the proposed approach outperforms existing models in terms of struck-out word detection and restoration.
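A minimal sketch of the first stage's classification step, assuming word crops have already been segmented: a ResNet50 backbone with a two-class head separates struck-out from clean words. The diagonal-line segmentation module and the U-Net + Bi-LSTM restoration stage are omitted; this is an illustrative skeleton, not the authors' network.

```python
import torch
import torch.nn as nn
from torchvision import models

class StruckOutClassifier(nn.Module):
    """Binary classifier: struck-out vs. clean handwritten word crops."""
    def __init__(self, num_classes: int = 2):
        super().__init__()
        self.backbone = models.resnet50(weights=None)  # pretrained weights optional
        self.backbone.fc = nn.Linear(self.backbone.fc.in_features, num_classes)

    def forward(self, word_crops: torch.Tensor) -> torch.Tensor:
        # word_crops: (B, 3, H, W) patches of already-segmented handwritten words
        return self.backbone(word_crops)  # (B, 2) logits

# Usage: logits = StruckOutClassifier()(torch.randn(4, 3, 224, 224))
```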
Cited by: 0
Full-reference calibration-free image quality assessment
IF 3.4; CAS Tier 3 (Engineering & Technology); Q2 ENGINEERING, ELECTRICAL & ELECTRONIC; Pub Date: 2024-09-23; DOI: 10.1016/j.image.2024.117212
Paolo Giannitrapani, Elio D. Di Claudio, Giovanni Jacovitti
Objective Image Quality Assessment (IQA) methods often lack linearity of their quality estimates with respect to scores expressed by human subjects, and therefore IQA metrics undergo a calibration process based on subjective quality examples. However, example-based training presents a challenge in terms of generalization, hampering result comparison across different applications and operative conditions. In this paper, new Full Reference (FR) techniques, providing estimates linearly correlated with human scores without using calibration, are introduced. We show that on natural images, application of estimation theory and psychophysical principles to images degraded by Gaussian blur leads to a so-called canonical IQA method, whose estimates are linearly correlated to both the subjective scores and the viewing distance. Then, we show that any mainstream IQA method can be reconducted to the canonical method by converting its metric based on a unique specimen image. The proposed scheme is extended to wide classes of degraded images, e.g. noisy and compressed images. The resulting calibration-free FR IQA methods allow for comparability and interoperability across different imaging systems and at different viewing distances. A comparison of their statistical performance with respect to state-of-the-art calibration-prone methods is finally provided, showing that the presented model is a valid alternative to the final 5-parameter calibration step of IQA methods, and that the two parameters of the model have a clear operational meaning and are simply determined in practical applications. The enhanced performance is achieved across multiple viewing distance databases by independently realigning the blur values associated with each distance.
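The specimen-image conversion can be pictured as follows: tabulate how an off-the-shelf metric responds to increasing Gaussian blur on a single reference image, then invert that monotone curve to express any score of that metric on a common blur-equivalent scale. SSIM, the sigma range, and the interpolation used below are assumptions for illustration; the paper's estimation-theoretic canonical method is not reproduced here.

```python
import numpy as np
from scipy.ndimage import gaussian_filter
from skimage.metrics import structural_similarity as ssim

def build_blur_lookup(specimen: np.ndarray, sigmas=np.linspace(0.1, 6.0, 30)):
    """Tabulate the metric's response to growing Gaussian blur on one specimen image."""
    scores = np.array([ssim(specimen, gaussian_filter(specimen, s), data_range=1.0)
                       for s in sigmas])
    return scores, np.asarray(sigmas)

def score_to_equivalent_blur(score: float, scores: np.ndarray, sigmas: np.ndarray) -> float:
    # Scores decrease as sigma grows, so interpolate on the reversed (increasing) axis.
    return float(np.interp(score, scores[::-1], sigmas[::-1]))
```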
Cited by: 0
Improved multi-focus image fusion using online convolutional sparse coding based on sample-dependent dictionary
IF 3.4; CAS Tier 3 (Engineering & Technology); Q2 ENGINEERING, ELECTRICAL & ELECTRONIC; Pub Date: 2024-09-19; DOI: 10.1016/j.image.2024.117213
Sidi He, Chengfang Zhang, Haoyue Li, Ziliang Feng
Multi-focus image fusion merges multiple images captured from different focused regions of a scene to create a fully-focused image. Convolutional sparse coding (CSC) methods are commonly employed for accurate extraction of focused regions, but they often disregard computational costs. To overcome this, an online convolutional sparse coding (OCSC) technique was introduced, but its performance is still limited by the number of filters used, affecting overall performance negatively. To address these limitations, a novel approach called Sample-Dependent Dictionary-based Online Convolutional Sparse Coding (SCSC) was proposed. SCSC enables the utilization of additional filters while maintaining low time and space complexity for processing high-dimensional or large data. Leveraging the computational efficiency and effective global feature extraction of SCSC, we propose a novel method for multi-focus image fusion. Our method involves a two-layer decomposition of each source image, yielding a base layer capturing the predominant features and a detail layer containing finer details. The amalgamation of the fused base and detail layers culminates in the reconstruction of the final image. The proposed method significantly mitigates artifacts, preserves fine details at the focus boundary, and demonstrates notable enhancements in both visual quality and objective evaluation of multi-focus image fusion.
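For contrast with the CSC approach, a bare-bones multi-focus rule looks like the sketch below: measure local focus with Laplacian energy and take each pixel from the sharper source. The focus measure and window size are assumptions; the sample-dependent dictionary and the online convolutional sparse coding of the proposed SCSC method are not shown.

```python
import numpy as np
from scipy.ndimage import laplace, uniform_filter

def focus_fuse(img_a: np.ndarray, img_b: np.ndarray, win: int = 9) -> np.ndarray:
    """img_a, img_b: registered grayscale float images focused on different regions."""
    act_a = uniform_filter(laplace(img_a) ** 2, size=win)  # local focus energy of A
    act_b = uniform_filter(laplace(img_b) ** 2, size=win)  # local focus energy of B
    mask = act_a >= act_b                                   # True where A looks sharper
    return np.where(mask, img_a, img_b)
```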
Cited by: 0
SynFlowMap: A synchronized optical flow remapping for video motion magnification
IF 3.4; CAS Tier 3 (Engineering & Technology); Q2 ENGINEERING, ELECTRICAL & ELECTRONIC; Pub Date: 2024-09-18; DOI: 10.1016/j.image.2024.117203
Jonathan A.S. Lima, Cristiano J. Miosso, Mylène C.Q. Farias
Motion magnification refers to the process of spatially amplifying small movements in a video to reveal important information about a scene. Several motion magnification methods have been proposed, but most of them introduce perceptible and annoying visual artifacts. In this paper, we propose a method that first analyzes the optical flow between the original frame and the corresponding frames, which are motion-magnified with other methods. Then, the method uses the generated optical flow map and the original video to synthesize a combined motion-magnified video. The method is able to amplify the motion by larger values, invert the direction of the motion, and combine filtered motion from multiple frequencies and Eulerian methods. Amongst other advantages, the proposed approach eliminates artifacts caused by Eulerian motion-magnification methods. We present an extensive qualitative and quantitative analysis of the results compared to the main approaches for Eulerian methods. A final contribution of this work is a new video database for motion magnification that allows the evaluation of quantitative motion magnification.
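A rough sketch of the remapping step, assuming grayscale uint8 frames and Farneback flow as the estimator: compute the flow between the original frame and a motion-magnified frame, scale it, and warp the original along the scaled field. The multi-frequency combination and artifact handling described in the paper are omitted; this is not the SynFlowMap implementation.

```python
import cv2
import numpy as np

def remap_with_scaled_flow(orig_gray: np.ndarray, magnified_gray: np.ndarray,
                           gain: float = 2.0) -> np.ndarray:
    """orig_gray, magnified_gray: single-channel uint8 frames of the same size."""
    flow = cv2.calcOpticalFlowFarneback(orig_gray, magnified_gray, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)  # (H, W, 2) flow field
    h, w = orig_gray.shape
    grid_x, grid_y = np.meshgrid(np.arange(w), np.arange(h))
    # Backward mapping: sample the original at positions displaced by the scaled flow.
    map_x = (grid_x - gain * flow[..., 0]).astype(np.float32)
    map_y = (grid_y - gain * flow[..., 1]).astype(np.float32)
    return cv2.remap(orig_gray, map_x, map_y, interpolation=cv2.INTER_LINEAR)
```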
Cited by: 0
Distributed virtual selective-forwarding units and SDN-assisted edge computing for optimization of multi-party WebRTC videoconferencing
IF 3.4; CAS Tier 3 (Engineering & Technology); Q2 ENGINEERING, ELECTRICAL & ELECTRONIC; Pub Date: 2024-09-12; DOI: 10.1016/j.image.2024.117173
R. Arda Kırmızıoğlu, A. Murat Tekalp, Burak Görkemli
Network service providers (NSP) have growing interest in placing network intelligence and services at network edges by deploying software-defined network (SDN) and network function virtualization infrastructure. In multi-party WebRTC videoconferencing using scalable video coding, a selective forwarding unit (SFU) provides connectivity between peers with heterogeneous bandwidth and terminals. An important question is where in the network to place the SFU service in order to minimize end-to-end delay between all pairs of peers. Clearly, there is no single optimal place for a cloud SFU for all possible peer locations. We propose placing virtual SFUs at network edges leveraging NSP edge datacenters to optimize end-to-end delay and usage of overall network resources. The main advantage of the distributed edge-SFU framework is that each peer video stream travels the shortest path to reach other peers similar to mesh connection model, whereas each peer uploads a single stream to its edge-SFU avoiding the upload bottleneck. While the proposed distributed edge-SFU framework applies to both best-effort and managed service models, this paper proposes a premium managed, edge-integrated multi-party WebRTC service architecture with bandwidth and delay guarantees within access networks by SDN-assisted slicing of edge networks. The performance of the proposed distributed edge-SFU service architecture is demonstrated by means of experimental results.
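The placement question can be stated very compactly; the toy sketch below picks, from hypothetical one-way delay measurements, the single edge site that minimizes the worst peer-to-peer delay through one SFU. Site names and delay values are invented, and the distributed multi-SFU and SDN-slicing mechanics of the paper are not modeled.

```python
def best_sfu_site(delay_ms: dict) -> str:
    """delay_ms[site][peer] = one-way delay (ms) from that peer to the edge site."""
    def worst_pair(site: str) -> float:
        peers = list(delay_ms[site])
        return max(delay_ms[site][a] + delay_ms[site][b]
                   for a in peers for b in peers if a != b)
    return min(delay_ms, key=worst_pair)

# Hypothetical measurements for three peers and two candidate edge sites:
sites = {"edge-A": {"p1": 10, "p2": 40, "p3": 25},
         "edge-B": {"p1": 30, "p2": 15, "p3": 20}}
print(best_sfu_site(sites))  # "edge-B": worst pair is 30 + 20 = 50 ms vs 40 + 25 = 65 ms
```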
Cited by: 0
Modulated deformable convolution based on graph convolution network for rail surface crack detection
IF 3.4; CAS Tier 3 (Engineering & Technology); Q2 ENGINEERING, ELECTRICAL & ELECTRONIC; Pub Date: 2024-09-10; DOI: 10.1016/j.image.2024.117202
Shuzhen Tong, Qing Wang, Xuan Wei, Cheng Lu, Xiaobo Lu

Accurate detection of rail surface cracks is essential but also tricky because of noise, low contrast, and density inhomogeneity. In this paper, to deal with the complex situations in rail surface crack detection, we propose a modulated deformable convolution based on a graph convolution network, named MDCGCN. The MDCGCN is a novel convolution that calculates the offsets and modulation scalars of the modulated deformable convolution by conducting a graph convolution network on a feature map. The MDCGCN improves the performance of different networks in rail surface crack detection while only slightly reducing inference speed. Finally, we demonstrate our method's numerical accuracy, computational efficiency, and effectiveness on the public segmentation dataset RSDD and our self-built detection dataset SEU-RSCD, and explore an appropriate network structure in the baseline network UNet with the MDCGCN.
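To make the offset/modulation idea concrete, here is a hedged sketch built on torchvision's deform_conv2d, with a plain 3x3 convolution standing in for the graph-convolution branch that the paper uses to predict offsets and modulation scalars; the stand-in branch is an assumption made to keep the example short.

```python
import torch
import torch.nn as nn
from torchvision.ops import deform_conv2d

class ModulatedDeformBlock(nn.Module):
    def __init__(self, in_ch: int, out_ch: int, k: int = 3):
        super().__init__()
        self.k = k
        self.weight = nn.Parameter(torch.randn(out_ch, in_ch, k, k) * 0.01)
        # 2*k*k channels for (x, y) offsets plus k*k channels for modulation scalars.
        self.side = nn.Conv2d(in_ch, 3 * k * k, kernel_size=3, padding=1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        side = self.side(x)                                   # stand-in for the GCN branch
        offset = side[:, :2 * self.k * self.k]
        mask = torch.sigmoid(side[:, 2 * self.k * self.k:])   # modulation scalars in (0, 1)
        return deform_conv2d(x, offset, self.weight, padding=1, mask=mask)

# Usage: y = ModulatedDeformBlock(16, 32)(torch.randn(1, 16, 64, 64))  # -> (1, 32, 64, 64)
```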

Cited by: 0
A global reweighting approach for cross-domain semantic segmentation
IF 3.4; CAS Tier 3 (Engineering & Technology); Q2 ENGINEERING, ELECTRICAL & ELECTRONIC; Pub Date: 2024-09-07; DOI: 10.1016/j.image.2024.117197
Yuhang Zhang, Shishun Tian, Muxin Liao, Guoguang Hua, Wenbin Zou, Chen Xu
Unsupervised domain adaptation semantic segmentation attracts much research attention due to the expensive pixel-level annotation cost. Since the adaptation difficulty of samples is different, the weight of samples should be set independently, which is called reweighting. However, existing reweighting methods only calculate local reweighting information from predicted results or context information in batch images of two domains, which may lead to over-alignment or under-alignment problems. To handle this issue, we propose a global reweighting approach. Specifically, we first define the target centroid distance, which describes the distance between the source batch data and the target centroid. Then, we employ a Fréchet Inception Distance metric to evaluate the domain divergence and embed it into the target centroid distance. Finally, a global reweighting strategy is proposed to enhance the knowledge transferability in the source domain supervision. Extensive experiments demonstrate that our approach achieves competitive performance and helps to improve performance in other methods.
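A stripped-down sketch of the reweighting direction: weight each source sample by its feature distance to the target centroid, so samples closer to the target domain dominate the supervised loss. The softmax form and temperature are assumptions, and the Fréchet Inception Distance term that the paper embeds into the distance is omitted.

```python
import torch

def centroid_reweight(src_feats: torch.Tensor, tgt_feats: torch.Tensor,
                      tau: float = 1.0) -> torch.Tensor:
    """src_feats: (B, D) source-batch features; tgt_feats: (M, D) target features."""
    centroid = tgt_feats.mean(dim=0, keepdim=True)             # (1, D) target centroid
    dist = torch.cdist(src_feats, centroid).squeeze(1)         # (B,) centroid distances
    weights = torch.softmax(-dist / tau, dim=0) * src_feats.size(0)  # mean weight is ~1
    return weights

# Usage: loss = (centroid_reweight(f_src, f_tgt) * per_sample_seg_loss).mean()
```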
由于像素级标注成本昂贵,无监督领域自适应语义分割备受研究关注。由于样本的适配难度不同,因此需要独立设置样本的权重,这就是所谓的重新加权。然而,现有的重新加权方法只是根据预测结果或两个领域批量图像中的上下文信息计算局部重新加权信息,这可能会导致过对齐或欠对齐问题。为了解决这个问题,我们提出了一种全局再加权方法。具体来说,我们首先定义目标中心点距离,它描述了源批次数据与目标中心点之间的距离。然后,我们采用弗雷谢特起始距离度量来评估域分歧,并将其嵌入目标中心点距离中。最后,我们提出了一种全局重权策略,以增强源领域监督中的知识可转移性。广泛的实验证明,我们的方法取得了具有竞争力的性能,并有助于提高其他方法的性能。
{"title":"A global reweighting approach for cross-domain semantic segmentation","authors":"Yuhang Zhang ,&nbsp;Shishun Tian ,&nbsp;Muxin Liao ,&nbsp;Guoguang Hua ,&nbsp;Wenbin Zou ,&nbsp;Chen Xu","doi":"10.1016/j.image.2024.117197","DOIUrl":"10.1016/j.image.2024.117197","url":null,"abstract":"<div><div>Unsupervised domain adaptation semantic segmentation attracts much research attention due to the expensive pixel-level annotation cost. Since the adaptation difficulty of samples is different, the weight of samples should be set independently, which is called reweighting. However, existing reweighting methods only calculate local reweighting information from predicted results or context information in batch images of two domains, which may lead to over-alignment or under-alignment problems. To handle this issue, we propose a global reweighting approach. Specifically, we first define the target centroid distance, which describes the distance between the source batch data and the target centroid. Then, we employ a Fréchet Inception Distance metric to evaluate the domain divergence and embed it into the target centroid distance. Finally, a global reweighting strategy is proposed to enhance the knowledge transferability in the source domain supervision. Extensive experiments demonstrate that our approach achieves competitive performance and helps to improve performance in other methods.</div></div>","PeriodicalId":49521,"journal":{"name":"Signal Processing-Image Communication","volume":"130 ","pages":"Article 117197"},"PeriodicalIF":3.4,"publicationDate":"2024-09-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142359027","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Cited by: 0
Memory positional encoding for image captioning
IF 3.4; CAS Tier 3 (Engineering & Technology); Q2 ENGINEERING, ELECTRICAL & ELECTRONIC; Pub Date: 2024-09-07; DOI: 10.1016/j.image.2024.117201
Xiaobao Yang, Shuai He, Jie Zhang, Sugang Ma, Zhiqiang Hou, Wei Sun

Transformer-based architectures represent the state of the art in image captioning. Due to its naturally parallel internal structure, the Transformer cannot be aware of the order of input tokens, so positional encoding becomes an indispensable component of Transformer-based models. However, most existing absolute positional encodings (APE) have certain limitations for image captioning. Their spatial positional features are predefined and cannot be well generalized to other forms of data, such as visual data. Meanwhile, the positional features are decoupled from each other and lack internal correlation, which affects the accuracy of the spatial position context representation of visual or text semantics to a certain extent. Therefore, we propose a memory positional encoding (MPE), which has generalization ability and can be applied to both the visual encoder and the sequence decoder of image captioning models. In MPE, each positional feature is recursively generated by a learnable network with a memory function, making the currently generated positional features effectively inherit information from the previous n positions. In addition, existing positional encodings provide positional features with fixed value and scale; that is, they provide the same positional encoding for different inputs, which is unreasonable. Thus, to address these issues of scale and value in current positional encoding methods in practical applications, we further explore dynamic memory positional encoding (DMPE) based on MPE. DMPE dynamically adjusts and generates positional features based on different inputs to provide them with a unique positional representation. Extensive experiments on MSCOCO validate the effectiveness of MPE and DMPE.
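A small sketch of what "recursively generated positional features" can look like, using a GRU as a generic network with memory (an assumption, not the authors' MPE module): each positional feature is produced from the previous ones rather than read from an independent table.

```python
import torch
import torch.nn as nn

class RecurrentPositionalEncoding(nn.Module):
    def __init__(self, d_model: int = 512):
        super().__init__()
        self.start = nn.Parameter(torch.zeros(1, 1, d_model))  # learned seed position
        self.cell = nn.GRU(d_model, d_model, batch_first=True)

    def forward(self, seq_len: int) -> torch.Tensor:
        feats, x, h = [], self.start, None
        for _ in range(seq_len):        # each position is generated from the previous ones
            x, h = self.cell(x, h)
            feats.append(x)
        return torch.cat(feats, dim=1)  # (1, seq_len, d_model), added to token embeddings

# Usage: tokens = tokens + RecurrentPositionalEncoding(512)(tokens.size(1))
```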

Cited by: 0
Style Optimization Networks for real-time semantic segmentation of rainy and foggy weather
IF 3.4; CAS Tier 3 (Engineering & Technology); Q2 ENGINEERING, ELECTRICAL & ELECTRONIC; Pub Date: 2024-09-07; DOI: 10.1016/j.image.2024.117199
Yifang Huang, Haitao He, Hongdou He, Guyu Zhao, Peng Shi, Pengpeng Fu
Semantic segmentation is an essential task in the field of computer vision. Existing semantic segmentation models can achieve good results under good weather and lighting conditions. However, when the external environment changes, the effectiveness of these models is seriously affected. Therefore, we focus on the task of semantic segmentation in rainy and foggy weather. Fog is a common phenomenon in rainy weather conditions and has a negative impact on image visibility. Besides, to make the algorithm satisfy the application requirements of mobile devices, the computational cost and the real-time requirement of the model are major concerns of our research. In this paper, we propose a novel Style Optimization Network (SONet) architecture, containing a Style Optimization Module (SOM) that can dynamically learn style information, and a Key information Extraction Module (KEM) that extracts important spatial and contextual information. This improves the learning ability and robustness of the model under rainy and foggy conditions. Meanwhile, we achieve real-time performance by using lightweight modules and a backbone network with low computational complexity. To validate the effectiveness of our SONet, we synthesized a rainy and foggy version of the CityScapes dataset and evaluated the accuracy and complexity of our model. Our model achieves a segmentation accuracy of 75.29% MIoU and 83.62% MPA on an NVIDIA TITAN Xp GPU. Several comparative experiments have shown that our SONet achieves good performance in semantic segmentation tasks under rainy and foggy weather, and due to its lightweight design it has an advantage in both accuracy and model complexity.
Cited by: 0