
Latest articles in Image and Vision Computing

DRM-YOLO: A YOLOv11-based structural optimization method for small object detection in UAV aerial imagery
IF 4.2, CAS Tier 3 (Computer Science), Q2 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE. Pub Date: 2025-12-30. DOI: 10.1016/j.imavis.2025.105894
Hongbo Bi, Rui Dai, Fengyang Han, Cong Zhang
With the falling cost of UAVs and advances in automation, drones are increasingly applied in agriculture, inspection, and smart cities. However, small object detection remains difficult due to tiny targets, sparse features, and complex backgrounds. To tackle these challenges, this paper presents an improved small object detection framework for UAV imagery, optimized from the YOLOv11n architecture. First, the proposed MetaDWBlock integrates multi-branch depthwise separable convolutions with a lightweight MLP, and its hierarchical MetaDWStage enhances contextual and fine-grained feature modeling. Second, the Cross-scale Feature Fusion Module (CFFM) employs the CARAFE upsampling operator for precise fusion of shallow spatial and deep semantic features, improving multi-scale perception. Finally, a scale-, spatial-, and task-aware Dynamic Head with an added P2 branch forms a four-branch detection head, markedly boosting detection accuracy for tiny objects. Experimental results on the VisDrone2019 dataset demonstrate that the proposed DRM-YOLO model significantly outperforms the baseline YOLOv11n in small object detection tasks, achieving a 21.4% improvement in mAP@0.5 and a 13.1% improvement in mAP@0.5:0.95. These results validate the effectiveness and practical value of the proposed method in enhancing the accuracy and robustness of small object detection in UAV aerial imagery. The code and results of our method are available at https://github.com/DRdairuiDR/DRM--YOLO.
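The parameter savings that motivate building blocks like MetaDWBlock on depthwise separable convolutions can be illustrated with a quick count. This is a generic sketch of the standard depthwise + pointwise factorization, not the paper's exact block; the channel sizes are arbitrary illustrative values:

```python
# Parameter count of a standard k x k convolution versus its depthwise
# separable factorization (k x k depthwise + 1 x 1 pointwise). Channel
# sizes are arbitrary examples, not taken from the paper.
def conv_params(c_in, c_out, k):
    standard = k * k * c_in * c_out           # one dense k x k kernel per output channel
    separable = k * k * c_in + c_in * c_out   # per-channel k x k filters + 1x1 channel mixing
    return standard, separable

std, sep = conv_params(c_in=64, c_out=128, k=3)
print(std, sep, round(std / sep, 1))  # 73728 8768 8.4
```

The roughly 8x reduction at these sizes is what lets a multi-branch block stay lightweight enough for a small-object detector.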
Citations: 0
Dual-stage network combining transformer and hybrid convolutions for stereo image super-resolution
IF 4.2, CAS Tier 3 (Computer Science), Q2 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE. Pub Date: 2025-12-29. DOI: 10.1016/j.imavis.2025.105892
Jintao Zeng, Aiwen Jiang, Feiqiang Liu
Stereo image super-resolution aims to recover high-resolution images from a given pair of low-resolution left- and right-view images. Its challenges lie in fully extracting features from each view and skillfully integrating information across views. Almost all current super-resolution models employ a single-stage strategy based on either a transformer or a convolutional neural network (CNN). For highly nonlinear problems, a single-stage network may not achieve ideal performance at acceptable complexity. In this paper, we propose a dual-stage stereo image super-resolution network (DSSRNet) that integrates the complementary advantages of transformers and convolutions. Specifically, we design a cross-stage attention module (CASM) to bridge informative feature transmission between successive stages. Moreover, we utilize Fourier convolutions to efficiently model global and local features, which benefits the restoration of image details and texture. We have compared the proposed DSSRNet with several state-of-the-art methods on public benchmark datasets. Comprehensive experiments demonstrate that DSSRNet restores clear structural features and richer texture details, achieving leading performance on the PSNR, SSIM, and LPIPS metrics with an acceptable computational burden in the stereo image super-resolution field. Related source code and models will be released at https://github.com/Zjtao-lab/DSSRNet.
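The Fourier-convolution idea mentioned for global and local feature modeling rests on the convolution theorem: a pointwise multiplication in the frequency domain equals a circular convolution over the whole spatial extent, so every output value sees the entire input. A minimal NumPy sketch of that mechanism, where the frequency weights are stand-ins for learned parameters, not DSSRNet's actual layers:

```python
import numpy as np

def spectral_mix(feature, freq_weight):
    """Multiply a 2D feature map by weights in the frequency domain.

    By the convolution theorem this is a circular convolution spanning the
    full spatial extent -- the global receptive field, at O(N log N) cost,
    that motivates Fourier convolutions. freq_weight is a stand-in for a
    learned parameter tensor.
    """
    spec = np.fft.rfft2(feature)                    # (H, W) -> (H, W//2 + 1) complex
    return np.fft.irfft2(spec * freq_weight, s=feature.shape)

feat = np.random.default_rng(0).standard_normal((8, 8))
ones = np.ones((8, 8 // 2 + 1))                     # all-ones weight acts as the identity filter
out = spectral_mix(feat, ones)
```

With the all-ones weight the round trip reproduces the input exactly, which is a quick sanity check that the transform pair is set up correctly.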
Citations: 0
Statistic temporal checking and spatial consistency based 3D size reconstruction of multiple objects from indoor monocular videos
IF 4.2, CAS Tier 3 (Computer Science), Q2 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE. Pub Date: 2025-12-27. DOI: 10.1016/j.imavis.2025.105890
Ziyue Wang, Xina Cheng, Takeshi Ikenaga
Reconstructing accurate 3D sizes of multiple objects from indoor monocular videos has gradually become a significant topic for robotics, smart homes, and wireless signal analysis. However, existing monocular reconstruction pipelines often focus on surface or 3D bounding-box reconstruction, yielding unreliable size estimates under occlusion, missing depth, and incomplete visibility. To accurately reconstruct the real size of objects of different shapes under complex indoor conditions, this work proposes a statistical checking module with depth layering and spatial consistency checking for accurate object size reconstruction. First, statistic temporal checking removes outliers around the object region by evaluating the observation frequency of feature points drawn from semantic information and the probability of foreground versus background regions. Second, depth layering provides a depth prior that sharpens object boundaries and increases 3D reconstruction accuracy. Then, a semantic-guided spatial consistency checking module infers the hidden or occluded parts of objects by exploiting category-specific priors and spatial consistency. The inferred complete object boundaries are enclosed using surface fitting and volumetric filling, yielding final volumetric occupancy estimates for each individual object. Extensive experiments demonstrate that the proposed method achieves a 0.3137 error rate, which is approximately 0.5641 lower than the average.
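The statistic temporal checking step — keeping only feature points that are observed consistently across frames and discarding rare ones as outliers — can be sketched as a simple frequency filter. The function name and the 0.6 threshold below are illustrative assumptions, not values from the paper:

```python
from collections import Counter

def temporal_filter(frames, min_ratio=0.6):
    """Keep feature-point ids observed in at least min_ratio of the frames.

    frames: list of per-frame collections of detected feature-point ids.
    Points seen too rarely across the video are treated as outliers around
    the object region (e.g. background clutter) and discarded. The 0.6
    threshold is an assumed example value.
    """
    counts = Counter(pid for frame in frames for pid in set(frame))
    return {pid for pid, c in counts.items() if c / len(frames) >= min_ratio}

# point 3 appears in only 1 of 3 frames (ratio 0.33) and is rejected
kept = temporal_filter([[1, 2], [1, 3], [1, 2]])
print(kept)  # {1, 2}
```

In the actual pipeline the per-frame ids would come from tracked feature points inside semantic object masks; the filter itself only needs their observation frequencies.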
Citations: 0