
Multimedia Systems: Latest Publications

Generating generalized zero-shot learning based on dual-path feature enhancement
IF 3.9 · CAS Tier 3 (Computer Science) · Q2 COMPUTER SCIENCE, INFORMATION SYSTEMS · Pub Date: 2024-09-19 · DOI: 10.1007/s00530-024-01485-8
Xinyi Chang, Zhen Wang, Wenhao Liu, Limeng Gao, Bingshuai Yan

Generalized zero-shot learning (GZSL) can classify both seen and unseen class samples, which plays a significant role in practical applications such as emerging species recognition and medical image recognition. However, most existing GZSL methods directly use a pre-trained deep model to learn image features. Because the data distribution of the GZSL dataset differs from that of the pre-training dataset, the obtained image features perform poorly: the feature distributions of different classes are similar, which makes them difficult to distinguish. To solve this problem, we propose a dual-path feature enhancement (DPFE) model, which consists of four modules: the feature generation network (FGN), the local fine-grained feature enhancement (LFFE) module, the global coarse-grained feature enhancement (GCFE) module, and the feedback module (FM). The feature generation network synthesizes unseen-class image features. We enhance the discriminative power and semantic relevance of the image features from both local and global perspectives. To focus on the image's local discriminative regions, the LFFE module processes the image in blocks and minimizes a semantic cycle-consistency loss to ensure that the region-block features contain key classification semantics. To prevent the information loss caused by image blocking, we design the GCFE module, which enforces consistency between the global image features and the semantic centers, thereby improving the discriminative power of the features. In addition, the feedback module feeds the middle-layer information of the discriminator network back to the generator network, so the synthesized image features are more similar to the real features. Experimental results demonstrate that the proposed DPFE method outperforms state-of-the-art methods on four zero-shot learning benchmark datasets.
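To make the feature-generation path concrete, here is a minimal PyTorch sketch of a conditional feature generator of the kind the FGN describes for GZSL: it maps a class attribute vector plus Gaussian noise to a synthetic visual feature. The dimensions (85-d attributes, 128-d noise, 2048-d features) and layer widths are illustrative assumptions, not the authors' configuration.

```python
import torch
import torch.nn as nn

class ConditionalFeatureGenerator(nn.Module):
    """Synthesizes class-conditional image features from a semantic
    attribute vector and Gaussian noise (GZSL-style feature generation)."""
    def __init__(self, attr_dim=85, noise_dim=128, feat_dim=2048):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(attr_dim + noise_dim, 4096),
            nn.LeakyReLU(0.2, inplace=True),
            nn.Linear(4096, feat_dim),
            nn.ReLU(inplace=True),  # visual features are non-negative (post-ReLU backbone)
        )

    def forward(self, attributes, noise):
        return self.net(torch.cat([attributes, noise], dim=1))

# usage: synthesize features for an unseen class described by its attributes
gen = ConditionalFeatureGenerator()
attrs = torch.rand(16, 85)      # 16 samples conditioned on one unseen class
z = torch.randn(16, 128)
fake_feats = gen(attrs, z)      # (16, 2048) synthetic visual features
```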

Citations: 0
Triple fusion and feature pyramid decoder for RGB-D semantic segmentation
IF 3.9 · CAS Tier 3 (Computer Science) · Q2 COMPUTER SCIENCE, INFORMATION SYSTEMS · Pub Date: 2024-09-16 · DOI: 10.1007/s00530-024-01459-w
Bin Ge, Xu Zhu, Zihan Tang, Chenxing Xia, Yiming Lu, Zhuang Chen

Current RGB-D semantic segmentation networks incorporate depth information as an extra modality and merge RGB and depth features using methods such as equal-weighted concatenation or simple fusion strategies. However, these methods hinder the effective utilization of cross-modal information. To address the problem that existing RGB-D semantic segmentation networks fail to fully exploit RGB and depth features, we propose an RGB-D semantic segmentation network based on triple fusion and feature pyramid decoding, which achieves bidirectional interaction and fusion of RGB and depth features via the proposed three-stage cross-modal fusion module (TCFM). The TCFM uses cross-modal cross-attention to inject information from each modality into the other, and then fuses the RGB and depth features through a channel-adaptive weighted fusion module. Furthermore, this paper introduces a lightweight feature pyramid decoder network to effectively fuse the multi-scale features extracted by the encoder. Experiments on the NYU Depth V2 and SUN RGB-D datasets demonstrate that the proposed cross-modal feature fusion network efficiently segments intricate scenes.
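As a rough illustration of the cross-modal cross-attention and channel-adaptive weighting described above, the sketch below exchanges information between RGB and depth token sequences in both directions and then merges them with a learned per-channel gate. The (B, N, C) token layout, head count, and gating design are assumptions rather than the exact TCFM.

```python
import torch
import torch.nn as nn

class CrossModalFusion(nn.Module):
    """Cross-attention in both directions (RGB <-> depth) followed by a
    channel-adaptive weighted merge. Inputs are (B, N, C) token sequences."""
    def __init__(self, dim=256, heads=8):
        super().__init__()
        self.rgb_from_depth = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.depth_from_rgb = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.gate = nn.Sequential(nn.Linear(2 * dim, dim), nn.Sigmoid())

    def forward(self, rgb, depth):
        # inject depth cues into RGB tokens and vice versa
        rgb2, _ = self.rgb_from_depth(query=rgb, key=depth, value=depth)
        dep2, _ = self.depth_from_rgb(query=depth, key=rgb, value=rgb)
        rgb, depth = rgb + rgb2, depth + dep2
        # channel-adaptive weights decide how much of each stream to keep
        w = self.gate(torch.cat([rgb, depth], dim=-1))
        return w * rgb + (1 - w) * depth

fused = CrossModalFusion()(torch.randn(2, 196, 256), torch.randn(2, 196, 256))
```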

Citations: 0
Automatic lymph node segmentation using deep parallel squeeze & excitation and attention Unet
IF 3.9 · CAS Tier 3 (Computer Science) · Q2 COMPUTER SCIENCE, INFORMATION SYSTEMS · Pub Date: 2024-09-13 · DOI: 10.1007/s00530-024-01465-y
Zhaorui Liu, Hao Chen, Caiyin Tang, Quan Li, Tao Peng

Automatic segmentation and lymph node (LN) detection are critical for cancer staging. In clinical practice, computed tomography (CT) and positron emission tomography (PET) imaging are used to detect abnormal LNs. It remains a difficult task due to the low contrast between LNs and the surrounding soft tissue and the variation in nodal size and shape. We designed a location-guided 3D dual network for LN segmentation. A localization module generates Gaussian masks centered on LNs within selected regions of interest (ROI). Our segmentation model incorporates squeeze & excitation (SE) and attention gate (AG) modules into a conventional 3D UNet architecture to make better use of informative features and increase segmentation accuracy. Lastly, we provide a simple boundary refinement module to polish the outcomes. We assessed the location-guided LN segmentation network's performance on a clinical head and neck cancer dataset. The location-guided network outperformed a comparable architecture without the Gaussian mask.
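The squeeze & excitation and attention gate ideas mentioned above can be sketched for 3D volumes as follows; this is a generic formulation of the two modules, with channel counts and a single-scale gating signal as assumptions, not the paper's exact design.

```python
import torch
import torch.nn as nn

class SEBlock3D(nn.Module):
    """Squeeze-and-excitation for 3D feature maps: global-pool each channel,
    then re-weight channels with a small bottleneck MLP."""
    def __init__(self, channels, reduction=8):
        super().__init__()
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction), nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels), nn.Sigmoid(),
        )

    def forward(self, x):                      # x: (B, C, D, H, W)
        w = x.mean(dim=(2, 3, 4))              # squeeze: (B, C)
        w = self.fc(w).view(x.size(0), -1, 1, 1, 1)
        return x * w                           # excitation: channel re-weighting

class AttentionGate3D(nn.Module):
    """Attention gate: the decoder (gating) signal suppresses irrelevant
    regions in the skip-connection features."""
    def __init__(self, skip_ch, gate_ch, inter_ch):
        super().__init__()
        self.theta = nn.Conv3d(skip_ch, inter_ch, kernel_size=1)
        self.phi = nn.Conv3d(gate_ch, inter_ch, kernel_size=1)
        self.psi = nn.Conv3d(inter_ch, 1, kernel_size=1)

    def forward(self, skip, gate):             # same spatial size assumed
        a = torch.sigmoid(self.psi(torch.relu(self.theta(skip) + self.phi(gate))))
        return skip * a                        # attended skip features

se = SEBlock3D(32)
y = se(torch.randn(1, 32, 16, 64, 64))         # channel-reweighted volume
```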

Citations: 0
CAFIN: cross-attention based face image repair network
IF 3.9 · CAS Tier 3 (Computer Science) · Q2 COMPUTER SCIENCE, INFORMATION SYSTEMS · Pub Date: 2024-09-13 · DOI: 10.1007/s00530-024-01466-x
Yaqian Li, Kairan Li, Haibin Li, Wenming Zhang

To address issues such as instability during the training of Generative Adversarial Networks, insufficient clarity in facial structure restoration, inadequate utilization of known information, and lack of attention to color information in images, a Cross-Attention Restoration Network is proposed. Initially, in the decoding part of the basic first-stage U-Net network, a combination of sub-pixel convolution and upsampling modules is employed to remedy the low-quality restoration caused by relying on a single upsampling step in the image recovery process. Subsequently, the restored output of the first-stage network and the unrestored images are used to compute cross-attention in both the spatial and channel dimensions, recovering the complete facial restoration image from the known repaired information. At the same time, we propose a loss function based on HSV space, assigning appropriate weights within the function to significantly improve the color fidelity of the image. Compared with classical methods, this model exhibits good performance in terms of peak signal-to-noise ratio, structural similarity, and FID.
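The combination of convolution and sub-pixel rearrangement used in the decoder can be illustrated with a small PyTorch block built on nn.PixelShuffle; the kernel size, activation, and channel counts here are assumptions.

```python
import torch
import torch.nn as nn

class SubPixelUp(nn.Module):
    """Decoder upsampling block: a convolution expands channels by r^2,
    then PixelShuffle rearranges them into an r-times larger feature map,
    avoiding the blur of plain interpolation-based upsampling."""
    def __init__(self, in_ch, out_ch, scale=2):
        super().__init__()
        self.conv = nn.Conv2d(in_ch, out_ch * scale ** 2, kernel_size=3, padding=1)
        self.shuffle = nn.PixelShuffle(scale)
        self.act = nn.LeakyReLU(0.2, inplace=True)

    def forward(self, x):
        return self.act(self.shuffle(self.conv(x)))

x = torch.randn(1, 128, 32, 32)
print(SubPixelUp(128, 64)(x).shape)   # torch.Size([1, 64, 64, 64])
```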

Citations: 0
A survey on deep learning-based camouflaged object detection
IF 3.9 · CAS Tier 3 (Computer Science) · Q2 COMPUTER SCIENCE, INFORMATION SYSTEMS · Pub Date: 2024-09-11 · DOI: 10.1007/s00530-024-01478-7
Junmin Zhong, Anzhi Wang, Chunhong Ren, Jintao Wu

Camouflaged object detection (COD) is an emerging visual detection task that aims to identify objects that conceal themselves in the surrounding environment. The high intrinsic similarity between camouflaged objects and their backgrounds makes COD far more challenging than traditional object detection. Recently, COD has attracted increasing research interest in the computer vision community, and numerous deep learning-based methods have been proposed, showing great potential. However, most existing work focuses on analyzing the structure of COD models, and few overview works summarize deep learning-based models. To address this gap, we provide a comprehensive analysis and summary of deep learning-based COD models. Specifically, we first classify 48 deep learning-based COD models and analyze their advantages and disadvantages. Second, we introduce four widely used COD datasets and the performance evaluation metrics. Then, we evaluate the performance of existing deep learning-based COD models on these four datasets. Finally, we indicate relevant applications and discuss challenges and future research directions for the COD task.

Citations: 0
Instance segmentation of faces and mouth-opening degrees based on improved YOLOv8 method
IF 3.9 · CAS Tier 3 (Computer Science) · Q2 COMPUTER SCIENCE, INFORMATION SYSTEMS · Pub Date: 2024-09-11 · DOI: 10.1007/s00530-024-01472-z
Yuhe Fan, Lixun Zhang, Canxing Zheng, Xingyuan Wang, Jinghui Zhu, Lan Wang

Instance segmentation of faces and mouth-opening degrees is an important technology for meal-assisting robotics in food delivery safety. However, because faces vary widely in shape, color, and posture, and the mouth has a small contour area, deforms easily, and is often occluded, real-time and accurate instance segmentation is challenging. In this paper, we propose a novel method for instance segmentation of faces and mouth-opening degrees. Specifically, in the backbone network, deformable convolution is introduced to better capture fine-grained spatial information, and the CloFormer module is introduced to improve the ability to capture high-frequency local and low-frequency global information. In the neck network, classical convolution and C2f modules are replaced by GSConv and VoV-GSCSP aggregation modules, respectively, to reduce the complexity and floating-point operations of the model. Finally, in the localization loss, the CIOU loss is replaced by the WIOU loss to reduce the competitiveness of high-quality anchor frames and mask the influence of low-quality samples, which in turn improves localization accuracy and generalization ability. The resulting model is abbreviated as DCGW-YOLOv8n-seg. It was compared with the baseline YOLOv8n-seg model and several state-of-the-art instance segmentation models on the datasets. The results show that the DCGW-YOLOv8n-seg model is characterized by high accuracy, speed, robustness, and generalization ability, and the contribution of each improvement was verified by ablation experiments. Finally, the DCGW-YOLOv8n-seg model was applied to an instance segmentation experiment for meal-assisting robotics, where it better realizes instance segmentation of faces and mouth-opening degrees. The proposed method can provide a guiding theoretical basis for meal-assisting robotics in food delivery safety and a reference for computer vision and image instance segmentation.
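For readers unfamiliar with GSConv, the sketch below shows one commonly cited formulation (a standard convolution to half the output channels, a cheap depthwise convolution on that result, concatenation, and a channel shuffle); the kernel sizes, normalization, and activation are assumptions and may differ from the configuration used in DCGW-YOLOv8n-seg.

```python
import torch
import torch.nn as nn

class GSConv(nn.Module):
    """GSConv-style block: cheap depthwise features are mixed with
    standard-conv features and channel-shuffled, cutting parameters and
    FLOPs versus a full standard convolution."""
    def __init__(self, c_in, c_out, k=3, s=1):
        super().__init__()
        c_half = c_out // 2
        self.dense = nn.Sequential(
            nn.Conv2d(c_in, c_half, k, s, k // 2, bias=False),
            nn.BatchNorm2d(c_half), nn.SiLU())
        self.cheap = nn.Sequential(              # depthwise convolution
            nn.Conv2d(c_half, c_half, 5, 1, 2, groups=c_half, bias=False),
            nn.BatchNorm2d(c_half), nn.SiLU())

    def forward(self, x):
        a = self.dense(x)
        b = self.cheap(a)
        y = torch.cat([a, b], dim=1)             # (B, c_out, H, W)
        # channel shuffle: interleave the two halves
        B, C, H, W = y.shape
        return y.view(B, 2, C // 2, H, W).transpose(1, 2).reshape(B, C, H, W)

print(GSConv(64, 128)(torch.randn(1, 64, 80, 80)).shape)  # (1, 128, 80, 80)
```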

Citations: 0
Implicit neural representation steganography by neuron pruning
IF 3.9 · CAS Tier 3 (Computer Science) · Q2 COMPUTER SCIENCE, INFORMATION SYSTEMS · Pub Date: 2024-09-10 · DOI: 10.1007/s00530-024-01476-9
Weina Dong, Jia Liu, Lifeng Chen, Wenquan Sun, Xiaozhong Pan, Yan Ke

Recently, implicit neural representation (INR) has started to be applied in image steganography. However, the quality of the stego and secret images represented by INR is generally low. In this paper, we propose an implicit neural representation steganography method based on neuron pruning. Initially, we randomly deactivate a portion of neurons to train an INR function for implicitly representing the secret image. Subsequently, we prune the neurons that are deemed unimportant for representing the secret image in an unstructured manner to obtain a secret function, while marking the positions of these neurons as the key. Finally, based on a partial optimization strategy, we reactivate the pruned neurons to construct a stego function for representing the cover image. The recipient only needs the shared key to recover the secret function from the stego function in order to reconstruct the secret image. Experimental results demonstrate that this method not only allows lossless recovery of the secret image, but also performs well in terms of capacity, fidelity, and undetectability. Experiments on images of different resolutions validate that our proposed method exhibits significant advantages in image quality over existing implicit representation steganography methods.
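A minimal sketch of the underlying idea, under the assumption of a plain coordinate-MLP INR: hidden neurons can be switched off by a binary mask whose positions serve as the key, so a secret function and a stego function can occupy complementary subsets of the same network. The layer sizes and masking scheme are illustrative, not the authors' exact pruning procedure.

```python
import torch
import torch.nn as nn

class MaskedINR(nn.Module):
    """Coordinate MLP (x, y) -> RGB whose hidden neurons can be switched
    off by a binary mask; the mask positions act as the shared key."""
    def __init__(self, hidden=256):
        super().__init__()
        self.l1 = nn.Linear(2, hidden)
        self.l2 = nn.Linear(hidden, hidden)
        self.l3 = nn.Linear(hidden, 3)
        # 1 = neuron active, 0 = pruned/deactivated (a buffer, not trained)
        self.register_buffer("mask", torch.ones(hidden))

    def forward(self, coords):                       # coords: (N, 2) in [-1, 1]
        h = torch.relu(self.l1(coords)) * self.mask  # deactivate masked neurons
        h = torch.relu(self.l2(h)) * self.mask
        return torch.sigmoid(self.l3(h))             # RGB in [0, 1]

# the secret function uses only the unmasked (key) subset of neurons;
# a stego function can later re-train the complementary neurons on the cover.
net = MaskedINR()
net.mask[torch.randperm(256)[:64]] = 0.0             # randomly deactivate 25%
rgb = net(torch.rand(1024, 2) * 2 - 1)               # query 1024 pixel coordinates
```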

Citations: 0
Multi-scale motion contrastive learning for self-supervised skeleton-based action recognition
IF 3.9 · CAS Tier 3 (Computer Science) · Q2 COMPUTER SCIENCE, INFORMATION SYSTEMS · Pub Date: 2024-09-10 · DOI: 10.1007/s00530-024-01463-0
Yushan Wu, Zengmin Xu, Mengwei Yuan, Tianchi Tang, Ruxing Meng, Zhongyuan Wang

People act on things and express feelings through actions, so action recognition has been widely studied, yet it remains under-explored. Traditional self-supervised skeleton-based action recognition focuses on joint-point features and ignores the inherent semantic information of body structures at different scales. To address this problem, we propose a multi-scale Motion Contrastive Learning of Visual Representations (MsMCLR) model. The model uses the Multi-scale Motion Attention (MsM Attention) module to divide the skeletal features into three scale levels and extract cross-frame and cross-node motion features from them. To obtain more motion patterns, a combination of strong data augmentations is used in the proposed model, which encourages the model to exploit more motion features. However, the feature sequences generated by strong data augmentation make it difficult to maintain the identity of the original sequence. Hence, we introduce a dual distributional divergence minimization method and propose a multi-scale motion loss function, which uses the embedding distribution of the ordinary augmentation branch to supervise the loss computation of the strong augmentation branch. Finally, the proposed method is evaluated on the NTU RGB+D 60, NTU RGB+D 120, and PKU-MMD datasets. The accuracy of our method is 1.4–3.0% higher than that of the frontier models.
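One way to read the dual distributional divergence minimization is sketched below: each branch's softmax similarity over a bank of negatives is treated as a distribution, and the (detached) ordinary-augmentation distribution supervises the strong-augmentation branch through a KL divergence. The temperature, the negative bank, and the exact divergence are assumptions, not the paper's precise loss.

```python
import torch
import torch.nn.functional as F

def distribution_divergence_loss(z_normal, z_strong, bank, tau=0.07):
    """Treat each embedding's softmax similarity over a negative bank as a
    distribution; the detached ordinary-augmentation distribution supervises
    the strong-augmentation branch via a KL divergence."""
    z_normal = F.normalize(z_normal, dim=1)
    z_strong = F.normalize(z_strong, dim=1)
    bank = F.normalize(bank, dim=1)
    p = F.softmax(z_normal.detach() @ bank.t() / tau, dim=1)   # target distribution
    log_q = F.log_softmax(z_strong @ bank.t() / tau, dim=1)    # student distribution
    return F.kl_div(log_q, p, reduction="batchmean")

loss = distribution_divergence_loss(
    torch.randn(32, 128), torch.randn(32, 128), torch.randn(4096, 128))
```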

Citations: 0
C2IENet: Multi-branch medical image fusion based on contrastive constraint features and information exchange
IF 3.9 · CAS Tier 3 (Computer Science) · Q2 COMPUTER SCIENCE, INFORMATION SYSTEMS · Pub Date: 2024-09-09 · DOI: 10.1007/s00530-024-01473-y
Jing Di, Chan Liang, Li Ren, Wenqing Guo, Jizhao Liu, Jing Lian

In the field of medical image fusion, traditional approaches often fail to differentiate between the unique characteristics of each raw image, leading to fused images with compromised texture and structural clarity. Addressing this, we introduce an advanced multi-branch fusion method characterized by contrast-enhanced features and interactive information exchange. This method integrates a multi-scale residual module and a gradient-dense module within a private branch to precisely extract and enrich texture details from individual raw images. In parallel, a common feature extraction branch, equipped with an information interaction module, processes paired raw images to synergistically capture complementary and shared functional information across modalities. Additionally, we implement a sophisticated attention mechanism tailored for both the private and public branches to enhance global feature extraction, thereby significantly improving the contrast and contour definition of the fused image. A novel correlation consistency loss function further refines the fusion process by optimizing the information sharing between modalities, promoting the correlation among basic cross-modal features while minimizing the correlation of high-frequency details across different modalities. Objective evaluations demonstrate substantial improvements in indices such as EN, MI, QMI, SSIM, AG, SF, and Q^{AB/F}, with average increases of 23.67%, 12.35%, 4.22%, 20.81%, 8.96%, 6.38%, and 25.36%, respectively. These results underscore our method's superiority in achieving enhanced texture detail and contrast in fused images compared to conventional algorithms, as validated by both subjective assessments and objective performance metrics.
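A hedged reading of the correlation consistency loss is sketched below: the Pearson correlation between the two modalities' base (shared) features is pushed up, while the correlation between their high-frequency detail features is pushed toward zero. The base/detail split and the specific correlation measure are assumptions, not the paper's exact formulation.

```python
import torch

def corr(a, b, eps=1e-8):
    """Per-sample Pearson correlation between two flattened feature tensors."""
    a = a.flatten(1) - a.flatten(1).mean(dim=1, keepdim=True)
    b = b.flatten(1) - b.flatten(1).mean(dim=1, keepdim=True)
    return (a * b).sum(1) / (a.norm(dim=1) * b.norm(dim=1) + eps)

def correlation_consistency_loss(base_a, base_b, detail_a, detail_b):
    """Encourage high correlation between the two modalities' base features
    and low correlation between their high-frequency detail features."""
    return (1 - corr(base_a, base_b)).mean() + corr(detail_a, detail_b).abs().mean()

l = correlation_consistency_loss(
    torch.randn(4, 64, 32, 32), torch.randn(4, 64, 32, 32),
    torch.randn(4, 64, 32, 32), torch.randn(4, 64, 32, 32))
```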

Citations: 0
LLR-MVSNet: a lightweight network for low-texture scene reconstruction
IF 3.9 · CAS Tier 3 (Computer Science) · Q2 COMPUTER SCIENCE, INFORMATION SYSTEMS · Pub Date: 2024-09-09 · DOI: 10.1007/s00530-024-01464-z
Lina Wang, Jiangfeng She, Qiang Zhao, Xiang Wen, Qifeng Wan, Shuangpin Wu

In recent years, learning-based MVS methods have achieved excellent performance compared with traditional methods. However, these methods still have notable shortcomings, such as the low efficiency of traditional convolutional networks and overly simple feature fusion, which lead to incomplete reconstruction. In this research, we propose a lightweight network for low-texture scene reconstruction (LLR-MVSNet). To improve accuracy and efficiency, the network includes a multi-scale feature extraction module and a weighted feature fusion module. The multi-scale feature extraction module uses depthwise-separable convolution and point-wise convolution in place of traditional convolution, which reduces network parameters and improves model efficiency. The weighted feature fusion module selectively emphasizes features and suppresses useless information, improving fusion accuracy. With rapid computational speed and high performance, our method surpasses state-of-the-art methods and performs well on the DTU and Tanks & Temples datasets. The code of our method will be made available at https://github.com/wln19/LLR-MVSNet.
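The parameter saving from replacing a standard convolution with depthwise plus point-wise convolutions can be seen in a few lines; this is the textbook depthwise-separable block, with the normalization and activation choices as assumptions.

```python
import torch
import torch.nn as nn

class DepthwiseSeparableConv(nn.Module):
    """Depthwise conv (one filter per channel) followed by a 1x1 pointwise
    conv; for a k x k kernel this needs roughly 1/k^2 + 1/C_out of the
    parameters of a standard convolution."""
    def __init__(self, c_in, c_out, k=3):
        super().__init__()
        self.depthwise = nn.Conv2d(c_in, c_in, k, padding=k // 2, groups=c_in, bias=False)
        self.pointwise = nn.Conv2d(c_in, c_out, kernel_size=1, bias=False)
        self.bn = nn.BatchNorm2d(c_out)
        self.act = nn.ReLU(inplace=True)

    def forward(self, x):
        return self.act(self.bn(self.pointwise(self.depthwise(x))))

y = DepthwiseSeparableConv(32, 64)(torch.randn(1, 32, 128, 160))  # (1, 64, 128, 160)
```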

Citations: 0