
Signal Processing-Image Communication — Latest Publications

Text-based person search via fine-grained cross-modal semantic alignment
IF 2.7 | CAS Tier 3 (Engineering & Technology) | Q2 ENGINEERING, ELECTRICAL & ELECTRONIC | Pub Date: 2026-03-01 | Epub Date: 2026-01-05 | DOI: 10.1016/j.image.2026.117478
Feng Chen , Jielong He , Yang Liu , Xiwen Qu
Existing text-based person search methods face challenges in handling complex cross-modal interactions, often failing to capture subtle semantic nuances. To address this, we propose a novel Fine-grained Cross-modal Semantic Alignment (FCSA) framework that enhances accuracy and robustness in text-based person search. FCSA introduces two key components: the Cross-Modal Reconstruction Strategy (CMRS) and the Saliency-Guided Masking Mechanism (SGMM). CMRS facilitates feature alignment by leveraging incomplete visual and textual features, promoting bidirectional reasoning across modalities, and enhancing fine-grained semantic understanding. SGMM further refines performance by dynamically focusing on salient visual patches and critical text tokens, thereby improving discriminative region perception and image–text matching precision. Our approach outperforms existing state-of-the-art methods, achieving mean Average Precision (mAP) scores of 69.72%, 43.78% and 48.78% on CUHK-PEDES, ICFG-PEDES, and RSTPReid, respectively. Source code is at https://github.com/flychen321/FCSA.
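The saliency-guided masking idea can be illustrated compactly. Below is a minimal sketch, assuming per-token saliency scores are already available (e.g., summed cross-attention weights); the function name and tensor shapes are illustrative, not the authors' released code — see the linked repository for that.

```python
# Minimal sketch of a saliency-guided masking step, assuming per-token
# saliency scores are already computed. Illustrative only, not FCSA's code.
import torch

def saliency_guided_mask(tokens: torch.Tensor, saliency: torch.Tensor,
                         mask_ratio: float = 0.3) -> torch.Tensor:
    """Zero out the most salient tokens so the model must reconstruct them.

    tokens:   (batch, seq_len, dim) visual patches or text embeddings
    saliency: (batch, seq_len) per-token importance scores
    """
    b, n, _ = tokens.shape
    k = max(1, int(n * mask_ratio))
    top_idx = saliency.topk(k, dim=1).indices          # k most salient slots
    keep = torch.ones(b, n, 1, device=tokens.device)
    keep.scatter_(1, top_idx.unsqueeze(-1), 0.0)       # zero at salient slots
    return tokens * keep

# Toy usage: mask 30% of 16 patch embeddings by (here random) saliency.
patches = torch.randn(2, 16, 64)
scores = torch.rand(2, 16)
masked = saliency_guided_mask(patches, scores)
```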
Citations: 0
Enhanced ISAR imaging of UAVs: Noise reduction via weighted atomic norm minimization and 2D-ADMM
IF 2.7 | CAS Tier 3 (Engineering & Technology) | Q2 ENGINEERING, ELECTRICAL & ELECTRONIC | Pub Date: 2026-03-01 | Epub Date: 2025-12-24 | DOI: 10.1016/j.image.2025.117468
Mohammad Roueinfar, Mohammad Hossein Kahaei
Noise in sparse-aperture Inverse Synthetic Aperture Radar (ISAR) poses a challenge for high-resolution image reconstruction at low Signal-to-Noise Ratios (SNRs). It is well known that image resolution is governed by the bandwidth of the transmitted signal and by the Coherent Processing Interval (CPI) in the range and azimuth dimensions, respectively. To reduce the noise effect and thus increase the two-dimensional resolution of Unmanned Aerial Vehicle (UAV) images, we propose the Fast Reweighted Atomic Norm Denoising (FRAND) algorithm, which incorporates weighted atomic norm minimization. To solve the resulting problem, a Two-Dimensional Alternating Direction Method of Multipliers (2D-ADMM) algorithm is developed to speed up the implementation. Assuming sparse apertures for ISAR images of UAVs, we compare the proposed method with the MUltiple SIgnal Classification (MUSIC), Cadzow, and SL0 methods at different SNRs. Simulation results show the superiority of FRAND at low SNRs under the Mean-Square Error (MSE), Peak Signal-to-Noise Ratio (PSNR), and Structural Similarity Index Measure (SSIM) criteria.
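Atomic norm solvers are involved, but the reweighting pattern FRAND applies can be conveyed on a simpler l1 analogue. The sketch below shows iteratively reweighted soft-thresholding of a noisy sparse signal; it is an intuition aid under that simplification, not the authors' 2D-ADMM solver.

```python
# Intuition aid only: the reweighting pattern applied to the simpler l1
# analogue (iteratively reweighted soft-thresholding of a sparse signal).
import numpy as np

def soft_threshold(x: np.ndarray, t: np.ndarray) -> np.ndarray:
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def reweighted_l1_denoise(y: np.ndarray, lam: float = 0.5,
                          n_outer: int = 5, eps: float = 1e-3) -> np.ndarray:
    """y: noisy observation of a sparse signal; returns a sparse estimate."""
    x = y.copy()
    for _ in range(n_outer):
        w = 1.0 / (np.abs(x) + eps)        # heavier penalty where x is small
        x = soft_threshold(y, lam * w)     # weighted proximal step
    return x

rng = np.random.default_rng(0)
truth = np.zeros(100)
truth[[10, 40, 70]] = [3.0, -2.0, 4.0]
estimate = reweighted_l1_denoise(truth + 0.3 * rng.standard_normal(100))
```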
Citations: 0
Video object segmentation based on feature compression and attention correction
IF 2.7 | CAS Tier 3 (Engineering & Technology) | Q2 ENGINEERING, ELECTRICAL & ELECTRONIC | Pub Date: 2026-03-01 | Epub Date: 2025-12-24 | DOI: 10.1016/j.image.2025.117456
Zhiqiang Hou, Jiale Dong, Chenxu Wang, Sugang Ma, Wangsheng Yu, Yuncheng Wang
Memory-network-based video object segmentation algorithms store information about the target object in an externally maintained memory bank. As segmentation progresses, the memory bank keeps growing, which leads to redundant feature information and degrades the execution efficiency of the algorithm. In addition, the key-value pairs stored in the memory bank undergo channel dimension reduction via standard convolution, leaving the representation of target-object features insufficient. In response to these issues, this paper proposes a video object segmentation algorithm based on feature compression and attention correction, constructing a reliable and effective memory bank that ensures efficient storage and updating of target-object information, thereby reducing computational complexity and storage consumption. A dual attention mechanism over the spatial and channel dimensions is proposed to correct feature information and enhance the representation ability of features. Extensive experiments show that the proposed algorithm is reliably competitive with mainstream algorithms of recent years.
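As a rough sketch of a spatial-plus-channel dual attention correction block, the module below follows the common squeeze-and-excite/spatial-gating pattern; the paper's exact design may differ.

```python
# A minimal spatial+channel dual-attention block following the common
# squeeze-and-excite / spatial-gating pattern. Illustrative, not the paper's.
import torch
import torch.nn as nn

class DualAttention(nn.Module):
    def __init__(self, channels: int, reduction: int = 8):
        super().__init__()
        # Channel branch: global pooling -> bottleneck MLP -> per-channel gate.
        self.channel_gate = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(channels, channels // reduction, 1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, 1),
            nn.Sigmoid(),
        )
        # Spatial branch: conv over channel statistics -> per-pixel gate.
        self.spatial_gate = nn.Sequential(
            nn.Conv2d(2, 1, kernel_size=7, padding=3),
            nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x = x * self.channel_gate(x)                        # reweight channels
        stats = torch.cat([x.mean(1, keepdim=True),
                           x.amax(1, keepdim=True)], dim=1)
        return x * self.spatial_gate(stats)                 # reweight positions

corrected = DualAttention(64)(torch.randn(1, 64, 32, 32))
```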
Citations: 0
U-MobileViT: A Lightweight Vision Transformer-based Backbone for Panoptic Driving Segmentation
IF 2.7 | CAS Tier 3 (Engineering & Technology) | Q2 ENGINEERING, ELECTRICAL & ELECTRONIC | Pub Date: 2026-03-01 | Epub Date: 2025-12-23 | DOI: 10.1016/j.image.2025.117461
Phuoc-Thinh Nguyen , The-Bang Nguyen , Phu Pham , Quang-Thinh Bui
Panoptic driving perception requires robust and efficient context understanding, demanding simultaneous semantic and instance segmentation. This paper proposes U-MobileViT, a lightweight backbone network designed to address this challenge. Our architecture combines the advantages of MobileViT, a family of Transformer-based models with high accuracy and fast processing speed, with the image-segmentation structure of the U-Net model, facilitating multiscale feature fusion and accurate localization. U-MobileViT efficiently combines local and global spatial information by utilizing MobileViT Blocks with Separable-Attention layers, resulting in a computationally lightweight yet effective architecture, while the U-Net structure enables efficient integration of features from different levels of the hierarchy. This synergistic combination generates rich, context-aware feature maps that are critical for accurate panoptic segmentation. Through extensive experiments on the challenging BDD100K driving dataset, we demonstrate that U-MobileViT achieves state-of-the-art performance in panoptic driving perception, outperforming existing lightweight models in both accuracy and inference speed. Our results demonstrate the potential of U-MobileViT as a robust and efficient backbone for real-time panoptic scene understanding in autonomous driving applications. Code is available at https://github.com/quyongkeomut/UMobileViT.
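The U-shaped skip fusion described above can be sketched with placeholder convolutional stages standing in for the MobileViT blocks; channel sizes and the single-mask head are illustrative only, not the released model.

```python
# Tiny U-shaped encoder/decoder with skip fusion; plain conv blocks stand in
# for MobileViT stages. Shapes and the head are illustrative assumptions.
import torch
import torch.nn as nn

def block(cin: int, cout: int) -> nn.Sequential:
    return nn.Sequential(nn.Conv2d(cin, cout, 3, padding=1),
                         nn.BatchNorm2d(cout), nn.ReLU(inplace=True))

class TinyUNet(nn.Module):
    def __init__(self, c: int = 16):
        super().__init__()
        self.enc1, self.enc2 = block(3, c), block(c, 2 * c)
        self.down = nn.MaxPool2d(2)
        self.up = nn.Upsample(scale_factor=2, mode="bilinear",
                              align_corners=False)
        self.dec = block(3 * c, c)           # 2c upsampled + c skip
        self.head = nn.Conv2d(c, 1, 1)       # e.g. a drivable-area mask

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        s1 = self.enc1(x)                    # full-resolution skip
        s2 = self.enc2(self.down(s1))        # low-resolution context
        fused = torch.cat([self.up(s2), s1], dim=1)   # multiscale fusion
        return self.head(self.dec(fused))

mask_logits = TinyUNet()(torch.randn(1, 3, 64, 64))
```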
Citations: 0
UHW-former: U-shape hybrid transformer with wavelet-based multi-scale feature fusion for nighttime UAV tracking
IF 2.7 | CAS Tier 3 (Engineering & Technology) | Q2 ENGINEERING, ELECTRICAL & ELECTRONIC | Pub Date: 2026-03-01 | Epub Date: 2026-01-15 | DOI: 10.1016/j.image.2026.117484
Haijun Wang, Haoyu Qu, Lihua Qi, Zihao Su
Most advancements in unmanned aerial vehicle (UAV) tracking have focused on daytime scenarios with optimal lighting conditions. However, the unpredictable and complex noise inherent in camera systems significantly impairs the effectiveness of UAV tracking algorithms, particularly in low-light environments. To address this challenge, we introduce a novel U-shaped plug-and-play denoising network that reduces cluttered and intricate real-world noise, thereby enhancing nighttime UAV tracking performance. Specifically, the U-shaped denoising network utilizes a CNN-Transformer block as the encoder, which incorporates hybrid attention to simultaneously capture both local details and global structures. Additionally, to further improve the denoising effect, we design a wavelet-based multi-scale feature fusion block that adaptively combines features from various stages of the encoding process. Finally, we develop a multi-feature collaboration decoder to fully integrate comprehensive features through multi-head transposed cross-attention. Extensive experiments demonstrate that the proposed UHW-former achieves remarkable denoising performance and significantly enhances nighttime UAV tracking.
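For intuition about the wavelet branch, here is a plain-torch Haar decomposition that exposes the low- and high-frequency sub-bands a multi-scale fusion block could consume; the paper's fusion design is more elaborate than this sketch.

```python
# Single-level 2D Haar wavelet transform in plain torch, yielding the four
# half-resolution sub-bands a wavelet-based fusion block could operate on.
import torch

def haar_dwt2(x: torch.Tensor):
    """x: (b, c, H, W) with even H, W -> (LL, LH, HL, HH) at half size."""
    a = x[..., 0::2, 0::2]; b = x[..., 0::2, 1::2]
    c = x[..., 1::2, 0::2]; d = x[..., 1::2, 1::2]
    ll = (a + b + c + d) / 2    # low-frequency approximation
    lh = (a - b + c - d) / 2    # horizontal detail
    hl = (a + b - c - d) / 2    # vertical detail
    hh = (a - b - c + d) / 2    # diagonal detail
    return ll, lh, hl, hh

img = torch.randn(1, 3, 64, 64)
ll, lh, hl, hh = haar_dwt2(img)   # each (1, 3, 32, 32); fuse per scale
```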
Citations: 0
Robust coverless image steganography based on ring features and DWT sequence mapping
IF 2.7 | CAS Tier 3 (Engineering & Technology) | Q2 ENGINEERING, ELECTRICAL & ELECTRONIC | Pub Date: 2026-03-01 | Epub Date: 2026-01-09 | DOI: 10.1016/j.image.2026.117482
Chen-Yi Lin, Su-Ho Chiu
The widespread adoption of the Internet has enhanced communication between individuals but increased the risk of secret messages being intercepted, drawing public attention to the security of message transmission. Image steganography has been a prominent area of research within the field of secure communication technologies. However, traditional image steganography techniques risk being compromised by steganalysis tools, leading researchers to propose the concept of coverless image steganography. In recent years, numerous coverless image steganography techniques have been developed that effectively resist steganalysis tools. However, these techniques commonly suffer from incomplete mapping of secret messages, rendering them incapable of successfully concealing the information. Furthermore, most existing coverless steganography techniques rely on cryptographic methods to protect auxiliary information, which may raise suspicion and result in interception, preventing the receiver from correctly recovering the secret messages. To address these issues, this study proposes a novel coverless image steganography technique based on ring features and discrete wavelet transform (DWT) sequence mapping. This method generates feature sequences from both the spatial and frequency domains of images and employs an innovative stego-image collage mechanism to transmit auxiliary information, thereby reducing the risk of interception. Experimental results demonstrate that the proposed technique significantly enhances the richness of feature sequences and the completeness of message mapping, achieving a 100% success rate on medium- and large-scale image datasets. Moreover, the proposed method exhibits superior robustness even under conditions where existing techniques suffer from low mapping success rates or prolonged mapping times.
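The coverless mapping idea can be illustrated with a toy example: derive a binary sequence from an image's wavelet approximation band and index a library of images by it, so a secret bit string is "sent" by transmitting images whose sequences match. The feature definition below is a simple stand-in, not the paper's ring features.

```python
# Toy coverless mapping: a binary feature sequence from a Haar approximation
# band indexes images, so secret bits select which images to transmit.
# Feature definition is a stand-in, not the paper's ring features.
import numpy as np

def feature_bits(img: np.ndarray, n_bits: int = 8) -> str:
    """img: (H, W) grayscale with even dims. 2x2 averaging approximates the
    Haar LL band; block means thresholded against the global mean give bits."""
    ll = (img[0::2, 0::2] + img[0::2, 1::2]
          + img[1::2, 0::2] + img[1::2, 1::2]) / 4
    blocks = np.array_split(ll.ravel(), n_bits)
    mean = ll.mean()
    return "".join("1" if b.mean() > mean else "0" for b in blocks)

rng = np.random.default_rng(1)
library = {feature_bits(rng.random((64, 64))): i for i in range(500)}
secret = "10110010"
carrier = library.get(secret)   # index of an image encoding the bits, if any
```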
Citations: 0
FlyAwareV2: A multimodal cross-domain UAV dataset for urban scene understanding
IF 2.7 | CAS Tier 3 (Engineering & Technology) | Q2 ENGINEERING, ELECTRICAL & ELECTRONIC | Pub Date: 2026-03-01 | Epub Date: 2026-01-16 | DOI: 10.1016/j.image.2026.117483
Francesco Barbato , Matteo Caligiuri , Pietro Zanuttigh
The development of computer vision algorithms for Unmanned Aerial Vehicle (UAV) applications in urban environments heavily relies on the availability of large-scale datasets with accurate annotations. However, collecting and annotating real-world UAV data is extremely challenging and costly. To address this limitation, we present FlyAwareV2, a novel multimodal dataset encompassing both real and synthetic UAV imagery tailored for urban scene understanding tasks. Building upon the recently introduced SynDrone and FlyAware datasets, FlyAwareV2 introduces several new key contributions: (1) Multimodal data (RGB, depth, semantic labels) across diverse environmental conditions including varying weather and daytime; (2) Depth maps for real samples computed via state-of-the-art monocular depth estimation; (3) Benchmarks for RGB and multimodal semantic segmentation on standard architectures; (4) Studies on synthetic-to-real domain adaptation to assess the generalization capabilities of models trained on the synthetic data. With its rich set of annotations and environmental diversity, FlyAwareV2 provides a valuable resource for research on UAV-based 3D urban scene understanding. Dataset link: https://medialab.dei.unipd.it/paper_data/FlyAwareV2
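A hypothetical loader sketch for a multimodal (RGB, depth, label) dataset laid out as parallel folders is shown below; the folder names and file pairing are assumptions for illustration, not the published FlyAwareV2 layout — consult the dataset link for the actual structure.

```python
# Hypothetical loader: pair RGB, depth, and label files by shared filename
# across parallel subfolders. Folder names are assumptions, not FlyAwareV2's
# documented layout.
from pathlib import Path

def index_dataset(root: str) -> list:
    """Return (rgb, depth, label) path triples with matching filenames."""
    base = Path(root)
    triples = []
    for rgb in sorted((base / "rgb").glob("*.png")):
        depth = base / "depth" / rgb.name
        label = base / "labels" / rgb.name
        if depth.exists() and label.exists():   # keep only complete samples
            triples.append((rgb, depth, label))
    return triples

samples = index_dataset("FlyAwareV2/train")     # hypothetical split folder
```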
Citations: 0
Deep learning model with co-ordinated relationship for image captioning enabled via attentional language encoder-decoder
IF 2.7 | CAS Tier 3 (Engineering & Technology) | Q2 ENGINEERING, ELECTRICAL & ELECTRONIC | Pub Date: 2026-03-01 | Epub Date: 2025-12-23 | DOI: 10.1016/j.image.2025.117466
Shaheen Raphiahmed Mujawar , Sridhar Iyer
The development of an image captioning system could make the world more accessible to people who are blind. Recently, researchers have focused on the need to create automatic textual descriptions for observed images. However, in computer vision and natural language processing, autonomously creating captions for images is difficult. Hence, this article proposes an efficient automatic image captioning approach with an attentional language encoder-decoder framework enabled by Deep Learning (DL) models. The developed model integrates four main components: the Feature Extractor Encoder Module (FEEM), the Co-ordinated Relationship Learning Module (CRLM), the Attentional Feature Fusion Module (AFFM), and the Language Decoder Module. Region- and semantic-based feature extraction from the image is ensured by utilizing the Res-Inception and Convolutional Neural Network (CNN) models. Moreover, CRLM is introduced to generate balanced relationship features, and AFFM is used to fuse various levels of visual information and selectively focus on the particular visual regions associated with each word prediction. An Attentional Model with Residual BiGRU (ARBiGRU) is implemented as the language model for decoding, effectively identifying the correct caption for the input image. The developed model is evaluated on the Flickr8k and Flickr30k datasets. To examine the performance of the proposed work, caption metrics such as BLEU, METEOR, CIDEr, and ROUGE-L are used. To evaluate the effectiveness of the proposed model, an ablation study is conducted over six cases, and the performance analysis demonstrates that the proposed approach outperforms existing caption-generation techniques.
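A minimal sketch of the residual bidirectional GRU layer of the kind the ARBiGRU decoder is built from is shown below; layer sizes are illustrative, not the authors' configuration.

```python
# Residual bidirectional GRU layer: the two directional halves concatenate
# back to the input width so a residual connection applies cleanly.
import torch
import torch.nn as nn

class ResidualBiGRU(nn.Module):
    def __init__(self, dim: int):
        super().__init__()
        self.gru = nn.GRU(dim, dim // 2, bidirectional=True, batch_first=True)
        self.norm = nn.LayerNorm(dim)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        out, _ = self.gru(x)
        return self.norm(x + out)      # residual connection around the GRU

tokens = torch.randn(2, 12, 64)        # (batch, seq, dim) word features
decoded = ResidualBiGRU(64)(tokens)
```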
Citations: 0
UW-SDE: Multi-scale prompt feature guided diffusion model for underwater image enhancement
IF 2.7 | CAS Tier 3 (Engineering & Technology) | Q2 ENGINEERING, ELECTRICAL & ELECTRONIC | Pub Date: 2026-03-01 | Epub Date: 2026-01-14 | DOI: 10.1016/j.image.2026.117486
Jiaxi Li, Junjun Wu, Qinghua Lu, Ningwei Qin, Shuhong Zhou, Weijian Li
In recent years, diffusion models have achieved remarkable performance in image generation and have been widely applied, and their potential in image enhancement tasks is gradually being explored. However, when applied to underwater scenes, diffusion models designed for general image restoration struggle to achieve their expected performance. This is because the scattering and absorption of light in underwater environments leave underwater images suffering from color distortion, low contrast, and haze. These issues often co-occur within a single underwater image, making underwater image enhancement more challenging than typical image enhancement tasks. To better adapt diffusion models for underwater image enhancement, this paper proposes an underwater image enhancement method based on a latent diffusion model. The proposed model's latent encoder progressively mitigates the adverse degradation factors embedded within the hidden layers while preserving essential image feature information in the latent representation, thus enabling a smoother diffusion process. Additionally, we design a gated fusion network that integrates guiding features at multiple scales, steering the network toward diffusion with superior visual-quality restoration. A series of qualitative and quantitative experiments conducted on various real-world underwater image datasets demonstrate that our proposed method outperforms recent state-of-the-art methods in terms of visual effects and generalization capability, proving the effectiveness of applying a diffusion model to underwater enhancement tasks.
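The gated fusion step can be sketched with the common sigmoid-gate formulation, blending guidance features from two scales; the paper's gating network is more elaborate than this stand-in.

```python
# Gated fusion of fine- and coarse-scale guidance features: a learned
# sigmoid gate blends the two per pixel. Common formulation, not UW-SDE's.
import torch
import torch.nn as nn
import torch.nn.functional as F

class GatedFusion(nn.Module):
    def __init__(self, channels: int):
        super().__init__()
        self.gate = nn.Sequential(
            nn.Conv2d(2 * channels, channels, 1), nn.Sigmoid())

    def forward(self, fine: torch.Tensor, coarse: torch.Tensor):
        coarse = F.interpolate(coarse, size=fine.shape[-2:],
                               mode="bilinear", align_corners=False)
        g = self.gate(torch.cat([fine, coarse], dim=1))
        return g * fine + (1 - g) * coarse   # per-pixel convex blend

fused = GatedFusion(32)(torch.randn(1, 32, 64, 64),
                        torch.randn(1, 32, 32, 32))
```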
Citations: 0
Learnable token for visual tracking
IF 2.7 | CAS Tier 3 (Engineering & Technology) | Q2 ENGINEERING, ELECTRICAL & ELECTRONIC | Pub Date: 2026-03-01 | Epub Date: 2025-12-23 | DOI: 10.1016/j.image.2025.117465
Yan Chen , Zhongkang Jiang , Jixiang Du , Hongbo Zhang
High-quality fusion of template and search frames is essential for effective visual object tracking. However, mainstream Transformer-based trackers, whether dual-stream or single-stream, often fuse these frames indiscriminately, allowing background noise to disrupt target-specific feature extraction. To address this, we propose LTTrack(learnable token for visual tracking), an adaptive feature fusion method based on a Transformer architecture with an autoregressive encoder–decoder structure. The core innovation is a learnable token in the encoder, which processes three inputs: search tokens, template tokens, and the learnable token. This token is designed to interact with the template, enabling precise fusion and extraction of target-relevant features. Our approach adaptively fuses search and template tokens, and extensive experiments show LTTrack achieves state-of-the-art performance across six challenging benchmarks.
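The learnable-token pattern the abstract describes — one extra token jointly attending over template and search tokens — can be sketched as follows; the module shapes and the single-layer form are illustrative assumptions, not the authors' architecture.

```python
# One encoder layer with a learnable fusion token prepended to the joint
# template+search sequence before self-attention. Illustrative sketch only.
import torch
import torch.nn as nn

class TokenEncoderLayer(nn.Module):
    def __init__(self, dim: int = 64, heads: int = 4):
        super().__init__()
        self.fuse_token = nn.Parameter(torch.zeros(1, 1, dim))
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)

    def forward(self, template: torch.Tensor, search: torch.Tensor):
        b = search.size(0)
        tok = self.fuse_token.expand(b, -1, -1)
        x = torch.cat([tok, template, search], dim=1)   # joint sequence
        out, _ = self.attn(x, x, x)                     # token attends to both
        return out[:, 0], out[:, -search.size(1):]      # token, search feats

layer = TokenEncoderLayer()
tok, search_feats = layer(torch.randn(2, 49, 64), torch.randn(2, 100, 64))
```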
Citations: 0