Latest articles from the International Journal of Computer Vision
Breaking Redundancy via 3D Sparse Geometry: 3D-aware Neural Compression for Multi-View Videos
IF 19.5 | Zone 2 (Computer Science) | Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2026-01-07 | DOI: 10.1007/s11263-025-02604-2
Shiwei Wang, Liquan Shen, Jimin Xiao, Zhaoyi Tian, Feifeng Wang, Xiangyu Hu, Yao Zhu, Guorui Feng
Citations: 0
Multi-Granularity Prediction with Learnable Fusion for Scene Text Recognition
IF 19.5 | Zone 2 (Computer Science) | Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2026-01-07 | DOI: 10.1007/s11263-025-02653-7
Cheng Da, Peng Wang, Cong Yao
Citations: 0
Evidence Conflict Sampling for Open-set Active Learning
IF 19.5 | Zone 2 (Computer Science) | Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2026-01-07 | DOI: 10.1007/s11263-025-02600-6
Kun-Peng Ning, Hai-Jian Ke, Jia-Yu Yao, Yu-Yang Liu, Yong-Hong Tian, Li Yuan
Citations: 0
FoleyCrafter: Bring Silent Videos to Life with Lifelike and Synchronized Sounds
IF 19.5 | Zone 2 (Computer Science) | Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2026-01-07 | DOI: 10.1007/s11263-025-02649-3
Yiming Zhang, Yicheng Gu, Yanhong Zeng, Zhening Xing, Yuancheng Wang, Zhizheng Wu, Bin Liu, Kai Chen
Citations: 0
CAS-AIR-3D: A Large-scale Low-quality Multi-modal Face Database
IF 19.5 | Zone 2 (Computer Science) | Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2026-01-07 | DOI: 10.1007/s11263-025-02674-2
Qi Li, Xiaoxiao Dong, Weining Wang, Zhenan Sun, Tieniu Tan, Caifeng Shan
Citations: 0
A Traditional Approach for Color Constancy and Color Assimilation Illusions with Its Applications to Low-Light Image Enhancement
IF 19.5 | Zone 2 (Computer Science) | Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2026-01-06 | DOI: 10.1007/s11263-025-02595-0
Oguzhan Ulucan, Diclehan Ulucan, Marc Ebner
The human visual system achieves color constancy, allowing consistent color perception under varying environmental contexts, while also being deceived by color illusions, where contextual information affects our perception. Despite the close relationship between color constancy and color illusions, and their potential benefits to the field, the two phenomena are rarely studied together in computer vision. In this study, we present the benefits of considering color illusions in computer vision. In particular, we introduce a learning-free method, multiresolution color constancy, which combines insights from computational neuroscience and computer vision to address both phenomena within a single framework. Our approach performs color constancy in both multi- and single-illuminant scenarios, while also being deceived by assimilation illusions. Additionally, we extend our method to low-light image enhancement, thus demonstrating its usability across different computer vision tasks. Through comprehensive experiments on color constancy, we show the effectiveness of our method in multi-illuminant and single-illuminant scenarios. Furthermore, we compare our method with state-of-the-art learning-based models on low-light image enhancement, where it shows competitive performance. This work presents the first method that integrates color constancy, color illusions, and low-light image enhancement in a single, explainable framework.
Citations: 0
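The abstract's central idea, illuminant estimation pooled across several resolutions, can be illustrated with a minimal numpy sketch. This is an illustrative toy under assumptions made here (a gray-world estimator, three pyramid levels, crude strided downsampling, simple gain averaging), not the authors' method.

```python
import numpy as np

def gray_world_gain(img):
    # Gray-world assumption: the average scene reflectance is achromatic,
    # so the per-channel means estimate the illuminant color.
    means = img.reshape(-1, 3).mean(axis=0)
    return means.mean() / np.clip(means, 1e-6, None)

def multiresolution_correction(img, levels=3):
    # Estimate correction gains at several resolutions and average them:
    # coarse levels reflect global illumination, fine levels local context.
    scaled = img.astype(np.float64)
    gains = []
    for _ in range(levels):
        gains.append(gray_world_gain(scaled))
        scaled = scaled[::2, ::2]  # crude 2x downsampling per pyramid level
    gain = np.mean(gains, axis=0)
    return np.clip(img * gain, 0.0, 1.0)

# A uniformly reddish image becomes achromatic after correction.
img = np.zeros((8, 8, 3))
img[..., 0], img[..., 1], img[..., 2] = 0.8, 0.4, 0.4
out = multiresolution_correction(img)
```

A learning-free pipeline like this has no trainable parameters, which is what lets the same machinery be probed with illusion stimuli as well as natural images.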
Hallucination Early Detection in Diffusion Models
IF 19.5 | Zone 2 (Computer Science) | Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2026-01-06 | DOI: 10.1007/s11263-025-02622-0
Federico Betti, Lorenzo Baraldi, Lorenzo Baraldi, Rita Cucchiara, Nicu Sebe
Citations: 0
Structure-from-motion in micro-image domain for uncalibrated plenoptic 2.0 cameras
IF 19.5 | Zone 2 (Computer Science) | Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2026-01-06 | DOI: 10.1007/s11263-025-02612-2
Sarah Dury, Daniele Bonatto, Jaime Sancho, Eduardo Juarez, Mehrdad Teratani, Gauthier Lafruit
Citations: 0
A Lightweight Hybrid Gabor Deep Learning Approach and its Application to Medical Image Classification
IF 19.5 | Zone 2 (Computer Science) | Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2026-01-05 | DOI: 10.1007/s11263-025-02658-2
Rayyan Ahmed, Hamza Baali, Abdesselam Bouzerdoum
Deep learning has revolutionized image analysis, but its applications are limited by the need for large datasets and high computational resources. Hybrid approaches that combine a domain-specific, universal feature extractor with learnable neural networks offer a promising balance of efficiency and accuracy. This paper presents a hybrid model integrating a Gabor filter bank front-end with compact neural networks for efficient feature extraction and classification. Gabor filters, inherently bandpass, extract early-stage features with spatially shifted filters covering the frequency plane to balance spatial and spectral localization. We introduce separate channels capturing low- and high-frequency components to enhance feature representation while maintaining efficiency. The approach reduces trainable parameters and training time while preserving accuracy, making it suitable for resource-constrained environments. Compared to MobileNetV2 and EfficientNetB0, our model trains approximately 4–6× faster on average while using fewer parameters and FLOPs. We compare it to pretrained networks used as feature extractors, lightweight fine-tuned models, and classical descriptors (HOG, LBP). It achieves competitive results with faster training and reduced computation. The hybrid model uses only around 0.60 GFLOPs and 0.34M parameters, and we apply statistical significance testing (ANOVA, paired t-tests) to validate the performance gains. Inference takes 0.01–0.02 s per image, up to 15× faster than EfficientNetB0 and 8× faster than MobileNetV2. Grad-CAM visualizations confirm localized attention on relevant regions. This work highlights integrating traditional features with deep learning to improve efficiency in resource-limited applications. Future work will address color fusion, robustness to noise, and automated filter optimization.
Citations: 0
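As a rough illustration of what a fixed Gabor front-end computes, the sketch below builds a small bank of oriented Gabor kernels and pools rectified filter responses into a feature vector. This is a generic textbook construction under assumptions chosen here (kernel size, wavelength, four orientations, FFT-based circular convolution), not the paper's architecture, which feeds such responses into compact trainable networks.

```python
import numpy as np

def gabor_kernel(size=15, sigma=3.0, theta=0.0, lam=6.0):
    # Real Gabor filter: a Gaussian envelope modulating an oriented cosine
    # carrier, giving a bandpass, orientation-selective kernel.
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)
    yr = -x * np.sin(theta) + y * np.cos(theta)
    envelope = np.exp(-(xr**2 + yr**2) / (2.0 * sigma**2))
    return envelope * np.cos(2.0 * np.pi * xr / lam)

def gabor_features(img, thetas=(0.0, np.pi / 4, np.pi / 2, 3 * np.pi / 4)):
    # Filter the image at each orientation (circular convolution via FFT)
    # and average the rectified response into one number per orientation.
    feats = []
    for theta in thetas:
        k = gabor_kernel(theta=theta)
        resp = np.real(np.fft.ifft2(np.fft.fft2(img) * np.fft.fft2(k, s=img.shape)))
        feats.append(np.abs(resp).mean())
    return np.array(feats)

# Vertical stripes (intensity varying along x, period matching lam) excite
# the 0-degree filter far more than the 90-degree one.
stripes = np.tile(np.cos(2.0 * np.pi * np.arange(60) / 6.0), (60, 1))
f = gabor_features(stripes)
```

Because the kernels are fixed rather than learned, the only trainable parameters in such a hybrid live in the classifier that consumes these responses, which is where the parameter and FLOP savings reported in the abstract come from.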
Learning from History: Task-agnostic Model Contrastive Learning for Image Restoration
IF 19.5 | Zone 2 (Computer Science) | Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2026-01-05 | DOI: 10.1007/s11263-025-02669-z
Gang Wu, Junjun Jiang, Kui Jiang, Xianming Liu, Wangmeng Zuo
Citations: 0