Signal Processing-Image Communication: Latest Articles

Markerless emotion recognition from full-body movements for Social XR
IF 2.7 | CAS Tier 3 (Engineering & Technology) | JCR Q2 (Engineering, Electrical & Electronic) | Pub Date: 2026-01-21 | DOI: 10.1016/j.image.2026.117489
Michael Neri , Sara Baldoni , Marco Carli , Federica Battisti
In this work, an emotion recognition system for enhancing social XR applications is presented. Although several techniques for emotion recognition have been proposed in the literature, they either require invasive and advanced equipment or exploit facial expressions, speech excerpts, physiological data, and text. By contrast, this contribution designs an approach for markerless emotion classification through body language. More specifically, human movements are analyzed over time by extracting the skeleton joints from videos acquired by consumer cameras. A normalization procedure is introduced to provide a depth-independent skeleton representation without distorting the skeleton shape. The performance of the proposed method has been assessed using a dataset of videos recorded from multiple points of view. An ad-hoc learning-based emotion classifier has been trained to recognize four emotions (happiness, boredom, interest, and disgust), achieving an average accuracy of 72.5%. The pre-processed dataset, code, and demo with pre-trained models are available at https://github.com/michaelneri/emotion-recognition-human-movements.
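The abstract describes the depth-independent skeleton normalization only at a high level. A minimal sketch of one plausible variant, which centers each frame on a root joint and divides by a single reference bone length so that proportions are preserved, is shown below; the (T, J, 2) layout, joint indices, and function name are illustrative assumptions, not the authors' exact procedure.

```python
import numpy as np

def normalize_skeleton(joints, root_idx=0, ref_pair=(5, 11)):
    """Center each frame on a root joint and rescale by one reference
    bone length, so the representation no longer depends on the
    subject's distance from the camera while joint ratios (the
    skeleton shape) are kept intact.

    joints: (T, J, 2) array of 2D joint coordinates over T frames.
    """
    joints = np.asarray(joints, dtype=np.float32)
    root = joints[:, root_idx:root_idx + 1, :]        # (T, 1, 2)
    centered = joints - root                          # translate to the root joint
    a, b = ref_pair
    ref_len = np.linalg.norm(centered[:, a] - centered[:, b], axis=-1)  # (T,)
    ref_len = np.maximum(ref_len, 1e-6)               # guard against degenerate frames
    return centered / ref_len[:, None, None]          # uniform, shape-preserving scale

# Example: 50 frames of 17 COCO-style joints
seq = np.random.rand(50, 17, 2) * 640
norm_seq = normalize_skeleton(seq)
```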
Citations: 0
Refining data granularity and feature fusion for boundary refinement in instance segmentation
IF 2.7 | CAS Tier 3 (Engineering & Technology) | JCR Q2 (Engineering, Electrical & Electronic) | Pub Date: 2026-01-19 | DOI: 10.1016/j.image.2026.117490
Yumeng Yan , Mingming Kong , Maochao Zhang , Shunnan Zhao , Chao Zhang
Considerable efforts have been made in the development of current instance segmentation approaches, but the segmentation of mask boundaries remains a challenge. Feature maps with low spatial resolution, along with the small proportion of edge pixels relative to the total pixel count, lead to inaccurate boundaries in instance masks. Furthermore, feature maps in high-resolution networks are typically parsed at a low level, making it difficult for the network to learn deeper semantic features. This paper presents improvements to Boundary Patch Refinement (BPR) for instance segmentation to address the above issues. First, we improve the bounding box extraction methods used in data processing, refining the granularity of the data. Second, we introduce a feature fusion approach specifically designed to optimize the feature fusion module within the backbone network. Third, we propose Deep enhancement and Memory optimization (DAM), a module that enhances the network’s ability to learn deeper features, improves its efficiency in acquiring semantic information, and substantially reduces the computational overhead during training. Experimental results demonstrate that our network yields notable improvements in both segmentation accuracy and computational efficiency and outperforms existing methods. The code is available at https://github.com/njezmjez/RDGFBR.
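As an illustration of the boundary-patch idea that BPR builds on, the sketch below crops square windows centered on the predicted mask boundary so that a refinement network can re-segment only the error-prone border region. The patch size, sampling step, and extraction rule are placeholder assumptions, not the settings used in the paper.

```python
import numpy as np
import cv2

def extract_boundary_patches(image, mask, patch=64, step=20):
    """Return square crops of `image` centred on boundary pixels of a
    binary instance `mask`, together with their boxes (x0, y0, x1, y1)."""
    m = (mask > 0).astype(np.uint8)
    # boundary = mask minus its morphological erosion
    boundary = m - cv2.erode(m, np.ones((3, 3), np.uint8))
    ys, xs = np.nonzero(boundary)
    h, w = m.shape
    patches, boxes = [], []
    for y, x in list(zip(ys, xs))[::step]:          # subsample boundary pixels
        y0 = int(np.clip(y - patch // 2, 0, h - patch))
        x0 = int(np.clip(x - patch // 2, 0, w - patch))
        patches.append(image[y0:y0 + patch, x0:x0 + patch])
        boxes.append((x0, y0, x0 + patch, y0 + patch))
    return patches, boxes
```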
Citations: 0
Enhancing few-shot semantic segmentation in remote sensing through magnitude-based pruning
IF 2.7 | CAS Tier 3 (Engineering & Technology) | JCR Q2 (Engineering, Electrical & Electronic) | Pub Date: 2026-01-19 | DOI: 10.1016/j.image.2026.117492
Kingsley Amoafo , Godwin Banafo Akrong , Ebenezer Owusu
Few-Shot Semantic Segmentation (FSSS) in remote sensing faces significant challenges owing to the limited availability of labeled data and the complexity of high-resolution imagery. To address these challenges, we propose a novel framework that integrates magnitude-based pruning with Cross-Matching and Self-Matching modules. By systematically pruning 30% of the redundant weights from the backbone network, we enhanced feature extraction and segmentation accuracy while maintaining model efficiency. The Cross-Matching Module establishes robust semantic correspondences between the support and query images, whereas the Self-Matching Module refines segmentation through intra-query correlations, incorporating spatial and semantic proximity to improve feature consistency. Experimental evaluations on the DLRSD-5i and ISAID-5i datasets demonstrated the effectiveness of the proposed method. On DLRSD-5i, the pruned SCCNet achieved a mean mIoU improvement of +9.40 (1-shot) and +6.46 (5-shot) over the baseline, outperforming state-of-the-art models. Similarly, on ISAID-5i, the pruned ResNet-101 surpassed the state of the art by +1.18 (1-shot) and +0.69 (5-shot) in mean mIoU. These results validate the effectiveness of pruning in optimizing the baseline model for FSSS tasks, thereby enhancing its ability to generalize and accurately segment complex remote-sensing imagery. We demonstrated that high-quality pruned feature maps can enhance segmentation accuracy without the need for additional enhancement modules. This approach not only improves segmentation performance but also provides valuable insights into the role of backbone optimization in FSSS. Our findings highlight the potential of magnitude-based pruning as a foundational strategy for aligning backbone optimization with the demands of few-shot tasks, thereby offering a scalable solution for remote sensing segmentation tasks.
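Magnitude-based pruning of a backbone can be reproduced with standard PyTorch utilities. The sketch below removes 30% of the smallest-magnitude weights in every convolutional layer of a ResNet-50, one common realization of the criterion; the backbone choice is a stand-in and the sketch does not claim to match the authors' exact pruning schedule.

```python
import torch.nn as nn
import torch.nn.utils.prune as prune
from torchvision.models import resnet50

# Hypothetical backbone; the paper's framework uses its own backbones.
backbone = resnet50(weights=None)

# Prune 30% of the smallest-magnitude (L1) weights in every conv layer.
for module in backbone.modules():
    if isinstance(module, nn.Conv2d):
        prune.l1_unstructured(module, name="weight", amount=0.3)
        prune.remove(module, "weight")  # bake the zeros into the weight tensor

# The pruned backbone is then fine-tuned on the few-shot segmentation episodes.
```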
Citations: 0
Global and interactive graph channel attention for robust stereo matching
IF 2.7 | CAS Tier 3 (Engineering & Technology) | JCR Q2 (Engineering, Electrical & Electronic) | Pub Date: 2026-01-19 | DOI: 10.1016/j.image.2026.117491
Jun Yu , Xiaofeng Wang , Yingying Su , Zhiheng Sun , Jiameng Sun
Current learning-based stereo matching methods are generally poor at adaptively exploring robust and salient features across different scenes, leading to matching ambiguity, especially in challenging areas. To tackle this problem, inspired by the global representation power of graphs, we propose a Graph Channel Attention (GCA) that learns binocular attention globally and interactively for robust stereo matching, instead of the traditional separate, local monocular attention. We first construct a 2D binocular graph structure with left and right subgraphs, in which the left and right channel information can interact globally. We then propose an interactive graph inference with cross interaction and inner aggregation to improve the linkage inference between and within the binocular graphs, which considers global and interactive attention information much like real human eyes. Thus, our GCA extends channel attention from the traditional 1D form to a binocular 2D form, imitating the global interaction and attention abilities of real human eyes. Finally, we integrate the GCA into stereo matching, and experimental results show that our method achieves state-of-the-art performance on KITTI 2012/2015 and Middlebury Stereo Evaluation v.3.
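The graph inference itself is not specified in the abstract. As a rough, simplified stand-in, the sketch below computes one channel-attention vector per view from the pooled descriptors of both views, so each view's channel weights depend on binocular information; it replaces the paper's graph inference with a plain MLP and is only meant to convey the cross-view interaction idea.

```python
import torch
import torch.nn as nn

class CrossViewChannelAttention(nn.Module):
    """Pooled channel descriptors from the left and right views are
    concatenated and mapped to one sigmoid attention vector per view,
    so each view's channel weighting depends on both views."""
    def __init__(self, channels, reduction=4):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(2 * channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, 2 * channels),
            nn.Sigmoid(),
        )

    def forward(self, left, right):
        # left, right: (B, C, H, W) feature maps from the two views
        b, c, _, _ = left.shape
        desc = torch.cat([left.mean(dim=(2, 3)), right.mean(dim=(2, 3))], dim=1)
        w = self.mlp(desc).view(b, 2, c, 1, 1)
        return left * w[:, 0], right * w[:, 1]
```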
Citations: 0
KNN improved Transformer for 3D object detection
IF 2.7 | CAS Tier 3 (Engineering & Technology) | JCR Q2 (Engineering, Electrical & Electronic) | Pub Date: 2026-01-16 | DOI: 10.1016/j.image.2026.117488
Chen Jiang, Shuxia Lu, Xianghu Zhou, Tingting Ma, Junhai Zhai
In recent years, 3D object detection for autonomous driving perception has gained significant attention in the industry. Owing to its precise range measurements and robustness to illumination, LiDAR has become the most commonly used and essential sensor. However, voxel-based networks often lose context information during the voxelization process, which negatively impacts the detection of small objects. In this paper, we address the challenge of low accuracy in LiDAR-based detection, especially for small object categories, by proposing an improved Transformer structure. Transformers are a type of deep learning model known for their ability to capture long-range dependencies and contextual relationships in data. In our approach, we incorporate a k-Nearest Neighbors (KNN) algorithm, a method for identifying the closest points in space, to strengthen the spatial relationships among points in the point cloud. This combination allows the model to better capture context information, strengthen feature extraction, and significantly reduce both missed and false detections. Our method is designed to be plug-and-play, allowing it to be applied directly to existing point cloud detectors. We evaluate our approach on the public KITTI and Astyx datasets. Experimental results show significant improvements, especially in detecting small object categories, even under challenging conditions.
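The role of KNN here can be illustrated with a small neighbor-gathering routine: for each point, collect the features of its k nearest neighbors so that a subsequent Transformer block can attend over local context. The tensor shapes, k value, and function name below are assumptions for illustration, not the paper's implementation.

```python
import torch

def knn_gather(points, feats, k=16):
    """For every point, find its k nearest neighbours in Euclidean space
    and gather their features, giving each point a local context window
    that an attention block can operate over.

    points: (N, 3) xyz coordinates; feats: (N, C) per-point features.
    Returns an (N, k, C) tensor of neighbour features."""
    dists = torch.cdist(points, points)          # (N, N) pairwise distances
    idx = dists.topk(k, largest=False).indices   # (N, k) indices of nearest points
    return feats[idx]                            # (N, k, C)

pts = torch.rand(1024, 3)
f = torch.rand(1024, 64)
ctx = knn_gather(pts, f, k=16)                   # (1024, 16, 64)
```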
Citations: 0
FlyAwareV2: A multimodal cross-domain UAV dataset for urban scene understanding
IF 2.7 | CAS Tier 3 (Engineering & Technology) | JCR Q2 (Engineering, Electrical & Electronic) | Pub Date: 2026-01-16 | DOI: 10.1016/j.image.2026.117483
Francesco Barbato , Matteo Caligiuri , Pietro Zanuttigh
The development of computer vision algorithms for Unmanned Aerial Vehicle (UAV) applications in urban environments heavily relies on the availability of large-scale datasets with accurate annotations. However, collecting and annotating real-world UAV data is extremely challenging and costly. To address this limitation, we present FlyAwareV2, a novel multimodal dataset encompassing both real and synthetic UAV imagery tailored for urban scene understanding tasks. Building upon the recently introduced SynDrone and FlyAware datasets, FlyAwareV2 introduces several new key contributions: (1) Multimodal data (RGB, depth, semantic labels) across diverse environmental conditions including varying weather and daytime; (2) Depth maps for real samples computed via state-of-the-art monocular depth estimation; (3) Benchmarks for RGB and multimodal semantic segmentation on standard architectures; (4) Studies on synthetic-to-real domain adaptation to assess the generalization capabilities of models trained on the synthetic data. With its rich set of annotations and environmental diversity, FlyAwareV2 provides a valuable resource for research on UAV-based 3D urban scene understanding. Dataset link: https://medialab.dei.unipd.it/paper_data/FlyAwareV2
Citations: 0
UHW-former: U-shape hybrid transformer with wavelet-based multi-scale feature fusion for nighttime UAV tracking
IF 2.7 | CAS Tier 3 (Engineering & Technology) | JCR Q2 (Engineering, Electrical & Electronic) | Pub Date: 2026-01-15 | DOI: 10.1016/j.image.2026.117484
Haijun Wang, Haoyu Qu, Lihua Qi, Zihao Su
Most advancements in unmanned aerial vehicle (UAV) tracking have focused on daytime scenarios with optimal lighting conditions. However, the unpredictable and complex noise inherent in camera systems significantly impairs the effectiveness of UAV tracking algorithms, particularly in low-light environments. To address this challenge, we introduce a novel U-shaped plug-and-play denoising network that reduces cluttered and intricate real-world noise, thereby enhancing nighttime UAV tracking performance. Specifically, the U-shaped denoising network utilizes a CNN-Transformer block as the encoder, which incorporates hybrid attention to simultaneously capture both local details and global structures. Additionally, to further improve the denoising effect, we design a wavelet-based multi-scale feature fusion block that adaptively combines features from various stages of the encoding process. Finally, we develop a multi-feature collaboration decoder to fully integrate comprehensive features through multi-head transposed cross-attention. Extensive experiments demonstrate that the proposed UHW-former achieves remarkable denoising performance and significantly enhances nighttime UAV tracking.
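The wavelet-based multi-scale fusion is described only at a high level. The sketch below fuses two same-shaped feature maps in a frequency-aware way by splitting each into a low-frequency (2x2 average) band and a high-frequency residual before recombining them; it assumes even spatial dimensions and is a generic stand-in, not the paper's module.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class FrequencyAwareFusion(nn.Module):
    """Split each input into a low-frequency band (2x2 average, akin to
    the Haar LL band) and a high-frequency residual, fuse the two inputs
    band by band with 1x1 convolutions, and recombine."""
    def __init__(self, channels):
        super().__init__()
        self.fuse_low = nn.Conv2d(2 * channels, channels, kernel_size=1)
        self.fuse_high = nn.Conv2d(2 * channels, channels, kernel_size=1)

    def forward(self, a, b):
        a_low, b_low = F.avg_pool2d(a, 2), F.avg_pool2d(b, 2)
        a_high = a - F.interpolate(a_low, scale_factor=2, mode="nearest")
        b_high = b - F.interpolate(b_low, scale_factor=2, mode="nearest")
        low = self.fuse_low(torch.cat([a_low, b_low], dim=1))
        high = self.fuse_high(torch.cat([a_high, b_high], dim=1))
        return F.interpolate(low, scale_factor=2, mode="nearest") + high
```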
Citations: 0
Cryptospace image steganography for cloud security via cycle-consistent GAN
IF 2.7 | CAS Tier 3 (Engineering & Technology) | JCR Q2 (Engineering, Electrical & Electronic) | Pub Date: 2026-01-14 | DOI: 10.1016/j.image.2026.117487
Shuying Xu , Chin-Chen Chang , Ji-Hwei Horng , Ching-Chun Chang
Cryptospace steganography has attracted increasing attention as an effective approach for enhancing data security in cloud environments. This paper proposes a hybrid framework that integrates cycle-consistent generative adversarial networks (CycleGAN) with the difference expansion (DE) technique to provide both image encryption and data hiding services. In the proposed framework, an image encryption network and a data encryption network are designed to encrypt the digital image and secret data, respectively, enabling a key-free architecture. A dropout-driven strategy is further introduced to support secure and isolated access control for multiple user groups on a shared cloud platform. Experimental results show that the proposed method achieves an embedding rate above 0.47 bpp and a secret data extraction accuracy exceeding 91%, demonstrating superior performance compared with state-of-the-art methods.
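The difference expansion (DE) component refers to the classic reversible pixel-pair embedding; a self-contained sketch of that primitive is given below (the CycleGAN encryption networks are outside its scope, and range/overflow checks are omitted for brevity).

```python
def de_embed(x, y, bit):
    """Embed one bit into a pixel pair (x, y) by difference expansion:
    keep the integer average l, double the difference h, and append the
    bit as its least-significant bit. The mapping is exactly invertible."""
    l = (x + y) // 2
    h = x - y
    h2 = 2 * h + bit
    return l + (h2 + 1) // 2, l - h2 // 2

def de_extract(x2, y2):
    """Recover the embedded bit and the original pixel pair."""
    l = (x2 + y2) // 2
    h2 = x2 - y2
    bit = h2 & 1
    h = h2 >> 1
    return bit, l + (h + 1) // 2, l - h // 2

x2, y2 = de_embed(100, 98, 1)   # -> (102, 97)
print(de_extract(x2, y2))       # -> (1, 100, 98)
```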
Citations: 0
UW-SDE: Multi-scale prompt feature guided diffusion model for underwater image enhancement
IF 2.7 | CAS Tier 3 (Engineering & Technology) | JCR Q2 (Engineering, Electrical & Electronic) | Pub Date: 2026-01-14 | DOI: 10.1016/j.image.2026.117486
Jiaxi Li, Junjun Wu, Qinghua Lu, Ningwei Qin, Shuhong Zhou, Weijian Li
In recent years, diffusion models have achieved remarkable performance in image generation and have been widely applied, and their potential for image enhancement tasks is gradually being explored. However, when applied to underwater scenes, diffusion models designed for general image restoration struggle to achieve their expected performance. This is due to the scattering and absorption of light in underwater environments, which cause underwater images to suffer from color distortion, low contrast, and haziness. These issues often co-occur within a single underwater image, making underwater image enhancement more challenging than typical image enhancement tasks. To better adapt diffusion models to underwater image enhancement, this paper proposes an underwater image enhancement method based on a latent diffusion model. The proposed model’s latent encoder progressively mitigates the adverse degradation factors embedded within the hidden layers while preserving essential image feature information in the latent representation, thus enabling a smoother diffusion process. Additionally, we design a gated fusion network that integrates guiding features at multiple scales, steering the diffusion process toward restorations with superior visual quality. A series of qualitative and quantitative experiments on various real-world underwater image datasets demonstrates that our proposed method outperforms recent state-of-the-art methods in terms of visual effects and generalization capability, proving the effectiveness of applying a diffusion model to underwater enhancement tasks.
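Gated fusion of guidance features can be illustrated with a minimal module in which a learned sigmoid gate decides, per pixel and channel, how much of the guiding feature to inject; the sketch below is a generic example of this mechanism, not the paper's exact block.

```python
import torch
import torch.nn as nn

class GatedFusion(nn.Module):
    """Inject a same-shaped guiding feature map into the main feature
    map, weighted by a gate predicted from both inputs."""
    def __init__(self, channels):
        super().__init__()
        self.gate = nn.Sequential(
            nn.Conv2d(2 * channels, channels, kernel_size=3, padding=1),
            nn.Sigmoid(),
        )

    def forward(self, x, guide):
        g = self.gate(torch.cat([x, guide], dim=1))  # per-pixel, per-channel gate
        return x + g * guide
```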
Citations: 0
A new baseline for edge detection: Make encoder–decoder great again
IF 2.7 | CAS Tier 3 (Engineering & Technology) | JCR Q2 (Engineering, Electrical & Electronic) | Pub Date: 2026-01-14 | DOI: 10.1016/j.image.2026.117485
Yachuan Li , Xavier Soria Poma , Yongke Xi , Guanlin Li , Chaozhi Yang , Qian Xiao , Yun Bai , Zongmin Li
The performance of deep learning based edge detectors has surpassed that of humans, but the huge computational cost and complex training strategies hinder their further development and application. In this paper, we alleviate these complexities with a vanilla encoder–decoder based detector. First, we design a bilateral encoder to decouple the extraction of spatial features and semantic features. As the spatial branch no longer guides the semantic branch, feature richness can be reduced, enabling a more compact model design. We propose a cascaded feature fusion decoder in which the spatial features are progressively refined by semantic features. The refined spatial features are the only basis for generating the edge map; the coarse original spatial features and the semantic features never directly influence the final result, so noise in the spatial features and localization errors in the semantic features are suppressed in the generated edge map. The proposed New Baseline for Edge Detection (NBED) achieves consistently superior performance across multiple edge detection benchmarks, even compared with methods that rely on huge computational costs and complex training strategies. The ODS of NBED on BSDS500 is 0.838, achieving state-of-the-art performance. Our study highlights that high-quality features are key to modern edge detection, and that encoder–decoder based detectors can achieve excellent performance without complex training or heavy computation. Furthermore, we take retinal vessel segmentation as an example to explore the application of NBED to downstream tasks. The code is available at https://github.com/Li-yachuan/NBED.
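A cascaded fusion decoder of the kind described can be sketched as follows: semantic features are upsampled stage by stage and used to refine a single spatial stream, and only the refined spatial stream produces the edge map. Channel sizes, stage counts, and the module name are illustrative assumptions, not the NBED implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class CascadedFusionDecoder(nn.Module):
    """Semantic features refine a single spatial stream stage by stage;
    only the refined spatial stream reaches the output head, so coarse
    spatial and semantic features never touch the edge map directly."""
    def __init__(self, spatial_ch=64, semantic_chs=(128, 256, 512)):
        super().__init__()
        self.refine = nn.ModuleList([
            nn.Conv2d(spatial_ch + c, spatial_ch, kernel_size=3, padding=1)
            for c in semantic_chs
        ])
        self.head = nn.Conv2d(spatial_ch, 1, kernel_size=1)

    def forward(self, spatial, semantics):
        # spatial: (B, spatial_ch, H, W); semantics: list of deeper feature maps
        x = spatial
        for conv, sem in zip(self.refine, semantics):
            sem = F.interpolate(sem, size=x.shape[-2:], mode="bilinear",
                                align_corners=False)
            x = F.relu(conv(torch.cat([x, sem], dim=1)))
        return torch.sigmoid(self.head(x))  # edge probability map
```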
Citations: 0