
Latest Publications in the International Journal of Computer Vision

FurniScene: A Large-scale 3D Room Dataset with Intricate Furnishing Scenes
IF 19.5 · CAS Tier 2 (Computer Science) · Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE · Pub Date: 2026-02-21 · DOI: 10.1007/s11263-025-02634-w
Yuxi Wang, Junran Peng, Genghao Zhang, Chuanchen Luo, Shibiao Xu, Man Zhang, Zhaoxiang Zhang
Citations: 0
Diffusion-Based Data Augmentation for Image Recognition: A Systematic Analysis and Evaluation
IF 19.5 · CAS Tier 2 (Computer Science) · Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE · Pub Date: 2026-02-21 · DOI: 10.1007/s11263-026-02754-x
Zekun Li, Yinghuan Shi, Yang Gao, Dong Xu
Citations: 0
An Effective-Efficient Approach for Dense Multi-Label Action Detection
IF 19.5 · CAS Tier 2 (Computer Science) · Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE · Pub Date: 2026-02-21 · DOI: 10.1007/s11263-026-02738-x
Faegheh Sardari, Armin Mustafa, Philip J. B. Jackson, Adrian Hilton
Unlike the sparse-label action detection task, in which a single action occurs at each timestamp of a video, in a dense multi-label scenario actions can overlap temporally. Addressing this challenging task requires simultaneously learning (i) co-occurrence action relationships and (ii) temporal dependencies. Current methods model co-occurrence action relationships by explicitly embedding class relations into the transformer network architecture. However, these approaches are computationally inefficient, as the network must compute all possible pairwise action-class relations. In this paper, we overcome this by introducing a novel framework, trained through a novel learning paradigm, that lets the network benefit from explicitly modelling temporal co-occurrence action dependencies during training without incurring their computational overhead during inference. Furthermore, to model temporal information, recent approaches extract multi-scale temporal features through hierarchical transformer-based networks. However, the self-attention mechanism in transformers inherently loses temporal positional information, and we argue that combining it with the multiple sub-sampling steps of hierarchical designs causes further loss of positional information. Preserving this information is essential for accurate action detection. We address this issue by proposing a novel transformer network that (a) employs a non-hierarchical structure when modelling different ranges of temporal dependencies and (b) embeds relative positional encoding in its transformer layers. We evaluate our approach on two challenging dense multi-label benchmark datasets, improving the current state-of-the-art per-frame mAP by 1.1% on Charades and 0.6% on MultiTHUMOS and achieving new state-of-the-art results of 26.5% and 44.6%, respectively. We also performed extensive ablation studies to examine the impact of the different components of our approach. Our code will be released upon paper publication: https://github.com/faeghehsardari/E-E-IJCV
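The abstract mentions embedding relative positional encoding in the transformer layers, but this listing does not include the paper's formulation. As a rough sketch of the general technique, the snippet below adds a learned bias, indexed by the relative offset between query and key positions, to the content-based self-attention scores before the softmax. All names here (`attention_with_relative_bias`, `rel_bias`) are illustrative, not taken from the paper.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax: subtract the row max before exponentiating.
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def attention_with_relative_bias(q, k, v, rel_bias):
    """Single-head self-attention with an additive relative-position bias.

    q, k, v  : (T, d) arrays for a sequence of T timesteps.
    rel_bias : (2T - 1,) learned bias, one entry per relative offset
               i - j in [-(T-1), T-1].
    """
    T, d = q.shape
    scores = q @ k.T / np.sqrt(d)                              # (T, T) content scores
    offsets = np.arange(T)[:, None] - np.arange(T)[None, :]    # i - j for every pair
    scores = scores + rel_bias[offsets + T - 1]                # shift offsets to [0, 2T-2]
    return softmax(scores, axis=-1) @ v
```

With `rel_bias` all zeros this reduces to plain scaled dot-product attention; a strongly negative bias at every nonzero offset forces each position to attend only to itself, which makes the positional term easy to sanity-check in isolation.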
Citations: 0
Not All Attention is Needed: Parameter and Computation Efficient Tuning for Multi-modal Large Language Models via Effective Attention Skipping
IF 19.5 · CAS Tier 2 (Computer Science) · Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE · Pub Date: 2026-02-21 · DOI: 10.1007/s11263-025-02702-1
Qiong Wu, Yiyi Zhou, Weihao Ye, Xiaoshuai Sun, Rongrong Ji
Citations: 0
A Polynomial Formula for the Perspective Four Points Problem
IF 19.5 · CAS Tier 2 (Computer Science) · Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE · Pub Date: 2026-02-21 · DOI: 10.1007/s11263-025-02660-8
David Lehavi, Brian Osserman
Citations: 0
Follow-Your-Emoji-Faster: Towards Efficient, Fine-Controllable, and Expressive Freestyle Portrait Animation
IF 19.5 · CAS Tier 2 (Computer Science) · Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE · Pub Date: 2026-02-21 · DOI: 10.1007/s11263-025-02685-z
Yue Ma, Zexuan Yan, Hongyu Liu, Hongfa Wang, Heng Pan, Yingqing He, Junkun Yuan, Ailing Zeng, Chengfei Cai, Heung-Yeung Shum, Zhifeng Li, Wei Liu, Linfeng Zhang, Qifeng Chen
Citations: 0
CurvLoc: Surface Curvature Prompted Gaussian Splatting for Visual Localization
IF 19.5 · CAS Tier 2 (Computer Science) · Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE · Pub Date: 2026-02-21 · DOI: 10.1007/s11263-026-02748-9
Hang Li, Jiawei Zhang, Jiahe Li, Botao Jiang, Zihang Wang, Xiaohan Yu, Jin Zheng, Xiao Bai, Haonan Luo
Citations: 0
A Survey of Multimodal Hallucination Evaluation and Detection
IF 19.5 · CAS Tier 2 (Computer Science) · Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE · Pub Date: 2026-02-21 · DOI: 10.1007/s11263-026-02756-9
Zhiyuan Chen, Yuecong Min, Jie Zhang, Bei Yan, Jiahao Wang, Xiaozhen Wang, Shiguang Shan
Citations: 0
B³CT: Three-Branch Learning with Unlabeled Target Signals for Domain-Robust Semantic Segmentation
IF 19.5 · CAS Tier 2 (Computer Science) · Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE · Pub Date: 2026-02-18 · DOI: 10.1007/s11263-026-02782-7
Chen Liang, Xin Zhao, Jian Jia, Junyan Wang, Lijun Cao, Jianguo Zhang, Weihua Chen
Citations: 0
Semantic-Centric Alignment for Zero-shot Panoptic Segmentation with Limited Data
IF 19.5 · CAS Tier 2 (Computer Science) · Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE · Pub Date: 2026-02-16 · DOI: 10.1007/s11263-025-02648-4
Jialei Chen, Daisuke Deguchi, Dongyue Li, Xu Zheng, Seigo Ito, Hiroshi Murase, Qi Fan
Citations: 0