
Latest Articles: International Journal of Computer Vision

Exploiting Unlabeled Data with Multiple Expert Teachers for Open Vocabulary Aerial Object Detection and Its Orientation Adaptation
IF 19.5 | CAS Tier 2, Computer Science | Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2026-03-06 | DOI: 10.1007/s11263-026-02733-2
Yan Li, Weiwei Guo, Xue Yang, Ning Liao, Shaofeng Zhang, Yi Yu, Wenxian Yu, Junchi Yan
Citations: 0
Light-VQA+: A Video Quality Assessment Model for Exposure Correction with Vision-Language Guidance
IF 19.5 | CAS Tier 2, Computer Science | Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2026-03-06 | DOI: 10.1007/s11263-025-02706-x
Xunchu Zhou, Xiaohong Liu, Yunlong Dong, Tengchuan Kou, Yixuan Gao, Zicheng Zhang, Chunyi Li, Haoning Wu, Guangtao Zhai
Citations: 0
Unsupervised Hyperspectral Image Super-Resolution via Self-Supervised Modality Decoupling
IF 19.5 | CAS Tier 2, Computer Science | Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2026-03-06 | DOI: 10.1007/s11263-026-02757-8
Songcheng Du, Yang Zou, Zixu Wang, Xingyuan Li, Ying Li, Changjing Shang, Qiang Shen
Citations: 0
DiffPano++: Scalable and Consistent Multi-View Panorama Generation with Spherical Epipolar-Aware Diffusion
IF 19.5 | CAS Tier 2, Computer Science | Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2026-03-06 | DOI: 10.1007/s11263-026-02758-7
Chenhao Ji, Weicai Ye, Zheng Chen, Junyao Gao, Xiaoshui Huang, Xuekuan Wang, Guofeng Zhang, Songhai Zhang, Tong He, Wanli Ouyang, Cairong Zhao
Citations: 0
Parameter-Efficient Fine-Tuning via Meta-Regularizer
IF 19.5 | CAS Tier 2, Computer Science | Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2026-03-06 | DOI: 10.1007/s11263-025-02693-z
Jinyoung Park, Juyeon Ko, Sanghyeok Lee, Joonmyung Choi, Hyunwoo J. Kim
Pre-trained vision-language models (e.g., CLIP) have shown impressive success in various computer vision tasks with their generalization capability. Recently, parameter-efficient fine-tuning (PEFT) approaches have been actively explored to effectively and efficiently adapt pre-trained vision-language models to a variety of downstream tasks. However, most existing PEFT approaches suffer from a task overfitting issue: the general knowledge of the pre-trained model is forgotten while a small number of learnable parameters in soft prompts/adapters are fine-tuned on a small dataset from a specific target task. Thus, we propose Parameter-Efficient Fine-Tuning via Meta-Regularization (PEFT-MetaR) to improve the generalizability of parameter-efficient fine-tuning methods for vision-language models. Specifically, PEFT-MetaR meta-learns both the regularizer and the learnable parameters to harness task-specific knowledge from the downstream tasks and task-agnostic general knowledge from the pre-trained models. Further, PEFT-MetaR augments the task to generate multiple virtual tasks, alleviating meta-overfitting. In addition, we provide an analysis of how PEFT-MetaR improves generalizability from the perspective of gradient alignment. Our experiments demonstrate that PEFT-MetaR improves the generalizability of parameter-efficient fine-tuning methods on various datasets.
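The core idea in the abstract, meta-learning a regularizer that anchors a small set of fine-tuned parameters to the pre-trained model, can be sketched in a toy form. This is an illustrative sketch only, not the authors' implementation: it uses a linear model, an L2 anchor regularizer of assumed strength `lam`, and a finite-difference meta-gradient on a held-out split; all names and the setup are assumptions.

```python
import numpy as np

def inner_adapt(theta0, X, y, lam, lr=0.05, steps=20):
    """Fine-tune from pretrained theta0 with anchor regularizer lam * ||theta - theta0||^2."""
    theta = theta0.copy()
    for _ in range(steps):
        grad_task = 2 * X.T @ (X @ theta - y) / len(y)  # MSE gradient: task-specific knowledge
        grad_reg = 2 * lam * (theta - theta0)           # pull toward pretrained weights: general knowledge
        theta -= lr * (grad_task + grad_reg)
    return theta

def meta_update(lam, theta0, train, val, eps=1e-3, meta_lr=0.5):
    """Outer loop: adjust the regularizer strength so the adapted model generalizes to held-out data."""
    def val_loss(l):
        theta = inner_adapt(theta0, *train, l)
        Xv, yv = val
        return np.mean((Xv @ theta - yv) ** 2)
    # finite-difference meta-gradient of validation loss w.r.t. lam
    g = (val_loss(lam + eps) - val_loss(lam - eps)) / (2 * eps)
    return max(0.0, lam - meta_lr * g)

rng = np.random.default_rng(0)
theta0 = np.array([1.0, -1.0])  # stand-in for "pretrained" weights
X = rng.normal(size=(16, 2))
y = X @ np.array([1.2, -0.8]) + 0.1 * rng.normal(size=16)
train, val = (X[:12], y[:12]), (X[12:], y[12:])

lam = meta_update(0.5, theta0, train, val)   # meta-learned regularizer strength
theta = inner_adapt(theta0, *train, lam)     # adaptation under the learned regularizer
```

In PEFT-MetaR the regularizer itself is meta-learned across (virtual) tasks rather than a single scalar tuned by finite differences; the sketch only shows the two-level structure of inner adaptation and outer regularizer update.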
Citations: 0
T2VShield: Model-Agnostic Jailbreak Defense for Text-to-Video Models
IF 19.5 | CAS Tier 2, Computer Science | Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2026-03-06 | DOI: 10.1007/s11263-025-02724-9
Siyuan Liang, Jiayang Liu, Jiecheng Zhai, Tianmeng Fang, Rongcheng Tu, Aishan Liu, Xiaochun Cao, Dacheng Tao
Citations: 0
EMUFormer: Efficient Multi-task Uncertainties for Reliable Joint Semantic Segmentation and Monocular Depth Estimation
IF 19.5 | CAS Tier 2, Computer Science | Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2026-03-06 | DOI: 10.1007/s11263-026-02751-0
Steven Landgraf, Markus Hillemann, Theodor Kapler, Markus Ulrich
Quantifying predictive uncertainty has emerged as a possible solution to common challenges of deep neural networks, such as overconfidence, lack of explainability, and limited robustness, albeit one that is often computationally expensive. Many real-world applications are multi-modal in nature and hence benefit from multi-task learning. In autonomous driving or robotics, for example, the joint solution of semantic segmentation and monocular depth estimation has proven to be valuable. To this end, we introduce EMUFormer, a novel student-teacher distillation approach for efficient multi-task uncertainties in the context of joint semantic segmentation and monocular depth estimation. By leveraging the predictive uncertainties of the teacher, EMUFormer achieves new state-of-the-art results on Cityscapes and NYUv2 and additionally estimates reliable predictive uncertainties for both tasks that are comparable or superior to those of a Deep Ensemble, despite being an order of magnitude more efficient to compute. These findings even extend to out-of-domain and domain adaptation scenarios, highlighting EMUFormer’s remarkable reliability.
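The distillation idea the abstract describes, a single student trained to reproduce both the prediction and the predictive uncertainty of an expensive Deep Ensemble teacher, can be sketched as follows. This is a hand-written toy illustration under assumptions, not the EMUFormer architecture: the teacher's uncertainty target is taken to be predictive entropy, and the student is scored with KL divergence on the prediction plus squared error on the uncertainty.

```python
import numpy as np

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)  # stabilize before exponentiation
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def teacher_targets(member_logits):
    """Deep-Ensemble teacher: average member probabilities; predictive entropy is the uncertainty target."""
    probs = np.mean([softmax(l) for l in member_logits], axis=0)
    entropy = -np.sum(probs * np.log(probs + 1e-12), axis=-1)
    return probs, entropy

def distill_loss(student_logits, student_unc, t_probs, t_entropy):
    """Student matches the teacher's prediction (KL term) and its uncertainty (squared-error term)."""
    s_probs = softmax(student_logits)
    kl = np.sum(t_probs * (np.log(t_probs + 1e-12) - np.log(s_probs + 1e-12)), axis=-1)
    return float(np.mean(kl + (student_unc - t_entropy) ** 2))

# toy batch: 3 ensemble members, 4 samples, 5 classes
rng = np.random.default_rng(1)
members = [rng.normal(size=(4, 5)) for _ in range(3)]
t_probs, t_entropy = teacher_targets(members)

# a student that perfectly mimics the teacher incurs (near-)zero loss
perfect_logits = np.log(t_probs + 1e-12)
loss = distill_loss(perfect_logits, t_entropy, t_probs, t_entropy)
```

At inference only the student runs, which is where the order-of-magnitude efficiency gain over evaluating every ensemble member comes from.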
Citations: 0
AEMIM: Adversarial Examples Meet Masked Image Modeling
IF 19.5 | CAS Tier 2, Computer Science | Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2026-03-06 | DOI: 10.1007/s11263-026-02741-2
Wenzhao Xiang, Chang Liu, Hang Su, Hongyang Yu
Citations: 0
An Interactive Conversational 3D Virtual Human
IF 19.5 | CAS Tier 2, Computer Science | Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2026-03-06 | DOI: 10.1007/s11263-025-02725-8
Richard Shaw, Youngkyoon Jang, Athanasios Papaioannou, Arthur Moreau, Helisa Dhamo, Zhensong Zhang, Eduardo Pérez-Pellitero
Citations: 0
You Only Look Intensity Once: Event-Driven Long-Term High-Speed Object Detection
IF 19.5 | CAS Tier 2, Computer Science | Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2026-03-06 | DOI: 10.1007/s11263-026-02749-8
Wen Dong, Haiyang Mei, Yinglian Ji, Yutong Jiang, Ziqi Wei, Shengfeng He, Xin Yang
Citations: 0