
Latest Articles: International Journal of Computer Vision

Sample-efficient Audio-Visual Learning of Scene Acoustics
IF 19.5 · CAS Tier 2 (Computer Science) · JCR Q1 (COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE) · Published 2026-03-11 · DOI: 10.1007/s11263-026-02767-6
Arjun Somayazulu, Sagnik Majumder, Changan Chen, Ziad Al-Halah, Kristen Grauman
An environment acoustic model represents how sound is transformed by the physical characteristics of an indoor environment, for any given source/receiver location. Whereas traditional methods for constructing such models assume dense geometry and/or sound measurements throughout the environment, we explore how to infer room impulse responses (RIRs) based on a sparse set of images and echoes observed in the space, as well as how to choose where to collect these audio-visual observations. Towards that goal, we first introduce a transformer-based method that uses self-attention to build a rich acoustic context, then infers the RIRs of arbitrary query source-receiver locations through cross-attention. Then, motivated by real-world physical constraints in collecting these observations, we further introduce active acoustic sampling, a new task in which a mobile agent jointly constructs the environment acoustic model and spatial occupancy map on-the-fly from sparse audio-visual observations. We train a reinforcement learning (RL) policy that guides agent navigation toward optimal acoustic data sampling positions, rewarding information gain for the full environment model. Evaluating on diverse unseen 3D indoor environments, our method outperforms the state-of-the-art and—in a major departure from traditional methods—generalizes to novel environments in a few-shot manner. Furthermore, when augmented with our active sampling policy, it successfully guides an embodied agent to acoustically informative positions given real-world exploration constraints, outperforming both traditional navigation agents and prior acoustic rendering methods. Project: http://vision.cs.utexas.edu/projects/fewShot-RIR
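The query pipeline the abstract describes (self-attention over sparse audio-visual observations to build an acoustic context, then cross-attention from a source/receiver query) can be sketched roughly as follows. This is an illustrative PyTorch sketch under stated assumptions, not the authors' implementation: the module name `RIRQueryDecoder`, all dimensions, and the 6-D source/receiver pose encoding are hypothetical.

```python
import torch
import torch.nn as nn

class RIRQueryDecoder(nn.Module):
    """Hypothetical sketch: encode sparse audio-visual observations with
    self-attention, then predict an RIR for an arbitrary source/receiver
    query via cross-attention. Sizes are illustrative, not from the paper."""

    def __init__(self, d_model=128, n_heads=4, n_layers=2, rir_len=256):
        super().__init__()
        enc_layer = nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True)
        self.context_encoder = nn.TransformerEncoder(enc_layer, n_layers)
        # Assumed query encoding: concatenated source xyz + receiver xyz.
        self.query_proj = nn.Linear(6, d_model)
        self.cross_attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.rir_head = nn.Linear(d_model, rir_len)

    def forward(self, obs_tokens, query_pose):
        # obs_tokens: (B, N_obs, d_model) fused image+echo features per viewpoint.
        ctx = self.context_encoder(obs_tokens)        # self-attention over observations
        q = self.query_proj(query_pose).unsqueeze(1)  # (B, 1, d_model)
        fused, _ = self.cross_attn(q, ctx, ctx)       # query attends to the context
        return self.rir_head(fused.squeeze(1))        # (B, rir_len) RIR estimate
```

A usage sketch: `RIRQueryDecoder()(torch.randn(2, 8, 128), torch.randn(2, 6))` returns a `(2, 256)` tensor, one predicted RIR per query.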
Citations: 0
Few-shot Class-Incremental Learning via Generative Co-Memory Regularization
IF 19.5 · CAS Tier 2 (Computer Science) · JCR Q1 (COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE) · Published 2026-03-07 · DOI: 10.1007/s11263-026-02746-x
Kexin Bao, Yong Li, Dan Zeng, Shiming Ge
Citations: 0
Graph in Graph Neural Network
IF 19.5 · CAS Tier 2 (Computer Science) · JCR Q1 (COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE) · Published 2026-03-07 · DOI: 10.1007/s11263-026-02731-4
Jiongshu Wang, Jing Yang, Jiankang Deng, Hatice Gunes, Siyang Song
Existing Graph Neural Networks (GNNs) are limited to processing graphs in which each vertex is represented by a vector or a single value, which restricts their capability to describe complex objects. In this paper, we propose a novel GNN (called the Graph in Graph Neural (GIG) Network) which can process graph-style data (called a GIG sample) whose vertices are themselves represented by graphs. Given a set of graphs, or a data sample whose components can be represented by a set of graphs (called a multi-graph data sample), our GIG network starts with a GIG sample generation (GSG) module which encodes the input as a GIG sample, where each GIG vertex includes a graph. Then, a set of GIG hidden layers are stacked, each consisting of: (1) a GIG vertex-level updating (GVU) module that individually updates the graph in every GIG vertex based on its internal information; and (2) a global-level GIG sample updating (GGU) module that updates the graphs in all GIG vertices based on their relationships, making the updated GIG vertices globally context-aware. This way, both the internal cues within the graph contained in each GIG vertex and the relationships among GIG vertices can be utilized for downstream tasks. Experimental results demonstrate that our GIG network generalizes well not only to various generic graph analysis tasks but also to real-world multi-graph data analysis (e.g., human skeleton video-based action recognition), achieving new state-of-the-art results on 15 out of 16 evaluated datasets. Our code is publicly available at https://github.com/wangjs96/Graph-in-Graph-Neural-Network
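The two per-layer modules described above (a vertex-level update inside each vertex's internal graph, then a global update across vertices) can be illustrated with a minimal sketch. This is not the authors' code: the mean-aggregation message passing, mean pooling, and residual updates are placeholder choices standing in for the paper's learned GVU/GGU modules.

```python
import torch

def gvu(node_feats, adj):
    """Vertex-level update (GVU stand-in): one round of mean-aggregation
    message passing inside a single GIG vertex's internal graph.
    node_feats: (N, d) inner-graph node features; adj: (N, N) adjacency."""
    deg = adj.sum(-1, keepdim=True).clamp(min=1)
    return node_feats + (adj @ node_feats) / deg

def ggu(vertex_graphs, outer_adj):
    """Global-level update (GGU stand-in): pool each inner graph to one
    embedding, exchange messages over the outer graph connecting GIG
    vertices, and broadcast the resulting context back into each graph."""
    pooled = torch.stack([g.mean(0) for g in vertex_graphs])  # (V, d)
    deg = outer_adj.sum(-1, keepdim=True).clamp(min=1)
    ctx = (outer_adj @ pooled) / deg                          # (V, d) context
    return [g + ctx[i] for i, g in enumerate(vertex_graphs)]
```

Stacking `gvu` over every vertex followed by `ggu` over the whole sample mimics one GIG hidden layer: inner graphs are refined locally, then made aware of the other vertices' content.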
Citations: 0
Unifying Viewgraph Sparsification and Disambiguation of Repeated Structures in Structure-from-Motion
IF 19.5 · CAS Tier 2 (Computer Science) · JCR Q1 (COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE) · Published 2026-03-07 · DOI: 10.1007/s11263-025-02681-3
Lalit Manam, Venu Madhav Govindu
Citations: 0
FreeTraj: Tuning-Free Trajectory Control via Noise Guided Video Diffusion
IF 19.5 · CAS Tier 2 (Computer Science) · JCR Q1 (COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE) · Published 2026-03-07 · DOI: 10.1007/s11263-026-02732-3
Haonan Qiu, Zhaoxi Chen, Zhouxia Wang, Yingqing He, Menghan Xia, Ziwei Liu
Citations: 0
Video Shadow Detection with Intra- and Inter-video Cooperation
IF 19.5 · CAS Tier 2 (Computer Science) · JCR Q1 (COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE) · Published 2026-03-07 · DOI: 10.1007/s11263-025-02715-w
Liang Wan, Zhihao Chen, Junting Zhao, Lei Zhu, Huazhu Fu, Wei Feng
Citations: 0
CoP: Chain of Perception for Referring 3D Instance Segmentation
IF 19.5 · CAS Tier 2 (Computer Science) · JCR Q1 (COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE) · Published 2026-03-07 · DOI: 10.1007/s11263-025-02699-7
Yiwei Ma, Jiayi Ji, Zhipeng Qian, Xiaoshuai Sun, Rongrong Ji
Citations: 0
TARGO and TARGO-Net: Benchmarking Target-Driven Object Grasping Under Occlusions
IF 19.5 · CAS Tier 2 (Computer Science) · JCR Q1 (COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE) · Published 2026-03-07 · DOI: 10.1007/s11263-025-02716-9
Yan Xia, Ran Ding, Ziyuan Qin, Guanqi Zhan, Kaichen Zhou, Long Yang, Hao Dong, Daniel Cremers
Citations: 0
Deep Learning-Based Point Cloud Registration: A Comprehensive Survey and Taxonomy
IF 19.5 · CAS Tier 2 (Computer Science) · JCR Q1 (COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE) · Published 2026-03-07 · DOI: 10.1007/s11263-025-02723-w
Yu-Xin Zhang, Jie Gui, Baosheng Yu, Xiaofeng Cong, Xin Gong, Wenbing Tao, Dacheng Tao
Citations: 0
Invert Your Prompt: Editing-Aware Diffusion Inversion
IF 19.5 · CAS Tier 2 (Computer Science) · JCR Q1 (COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE) · Published 2026-03-07 · DOI: 10.1007/s11263-025-02691-1
Yangyang Xu, Wenqi Shao, Yong Du, Haiming Zhu, Yang Zhou, Jiayuan Xie, Ping Luo, Shengfeng He
Citations: 0