
Deep Learning on Graphs: Latest Publications

Advanced Applications in Graph Neural Networks
Pub Date: 2021-09-30 | DOI: 10.1017/9781108924184.021
Citations: 0
Advanced Topics in Graph Neural Networks
Pub Date: 2021-09-30 | DOI: 10.1017/9781108924184.020
Citations: 0
Graph Neural Networks in Natural Language Processing
Pub Date: 2021-09-30 | DOI: 10.1017/9781108924184.015
Bang Liu, Lingfei Wu
Natural language processing (NLP) and understanding aim to read unformatted text in order to accomplish different tasks. While word embeddings learned by deep neural networks are widely used, such representations cannot fully exploit the underlying linguistic and semantic structures of text pieces. Graphs are a natural way to capture the connections between different text pieces, such as entities, sentences, and documents. To overcome the limits of vector-space models, researchers combine deep learning models with graph-structured representations for various tasks in NLP and text mining. Such combinations make full use of both the structural information in text and the representation-learning ability of deep neural networks. In this chapter, we introduce the various graph representations that are extensively used in NLP and show how different NLP tasks can be tackled from a graph perspective. We summarize recent research on graph-based NLP and discuss in detail two case studies related to graph-based text clustering and matching, and multi-hop machine reading comprehension. Finally, we provide a synthesis of the important open problems of this subfield.
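The abstract describes representing text pieces (entities, sentences, documents) as nodes of a graph. A minimal sketch of one such construction, a sentence graph with edges between sentences that share a word, might look like the following; the sentences, the overlap criterion, and all names are illustrative assumptions, not taken from the chapter:

```python
import numpy as np

# Toy corpus: each sentence becomes one graph node.
sentences = [
    "graphs capture relations between entities",
    "entities appear in documents",
    "deep networks learn representations",
]
vocab_sets = [set(s.split()) for s in sentences]

# Connect two sentences if they share at least one word.
n = len(sentences)
adj = np.zeros((n, n), dtype=int)
for i in range(n):
    for j in range(i + 1, n):
        if vocab_sets[i] & vocab_sets[j]:  # shared word -> undirected edge
            adj[i, j] = adj[j, i] = 1

print(adj.tolist())
```

Here only sentences 0 and 1 are linked (they share "entities"); a real system would use a less brittle criterion such as embedding similarity.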
Citations: 6
Index
Pub Date: 2021-09-30 | DOI: 10.1017/9781108924184.023
Citations: 0
Graph Neural Networks
Pub Date: 2021-09-30 | DOI: 10.1017/9781108924184.009
Yuyu Zhang, Xinshi Chen, Yuan Yang, Arun Ramamurthy, Bo Li, Yuan Qi, Le Song
Deep learning has become one of the most dominant approaches in artificial intelligence research today. Although conventional deep learning techniques have achieved huge successes on Euclidean data such as images, or sequence data such as text, many applications are naturally or best represented with a graph structure. This gap has driven a tide of research on deep learning on graphs; among these approaches, Graph Neural Networks (GNNs) are the most successful in coping with various learning tasks across a large number of application domains. In this chapter, we systematically organize the existing research on GNNs along three axes: foundations, frontiers, and applications. We introduce the fundamental aspects of GNNs, ranging from the popular models and their expressive power to the scalability, interpretability, and robustness of GNNs. Then, we discuss various frontier research, ranging from graph classification and link prediction to graph generation and transformation, graph matching, and graph structure learning. Based on these, we further summarize the basic procedures that make full use of various GNNs for a large number of applications. Finally, we describe the organization of the book and summarize the roadmap of the various research topics of GNNs.
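As a concrete illustration of the neighborhood-aggregation idea behind the GNN models surveyed in this chapter, here is a minimal sketch of a single mean-aggregation (GCN-style) layer. The function name, toy graph, and weights are invented for illustration; this is a sketch of the general technique, not the book's implementation:

```python
import numpy as np

def gnn_layer(adj, feats, weight):
    """One GCN-style layer: mean-aggregate neighbors, transform, ReLU."""
    adj_hat = adj + np.eye(adj.shape[0])      # add self-loops
    deg = adj_hat.sum(axis=1, keepdims=True)  # degree of each node
    agg = adj_hat @ feats / deg               # mean over the neighborhood
    return np.maximum(agg @ weight, 0.0)      # linear transform + ReLU

# Toy path graph 0 - 1 - 2 with one-hot node features.
adj = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]], dtype=float)
feats = np.eye(3)
weight = np.ones((3, 2))  # illustrative weights, normally learned
out = gnn_layer(adj, feats, weight)
print(out.shape)
```

Stacking such layers lets information propagate over multi-hop neighborhoods, which is the core mechanism the chapter's foundational models build on.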
Citations: 78
Deep Learning on Graphs: An Introduction
Pub Date: 2021-09-30 | DOI: 10.1017/9781108924184.003
Citations: 0
Graph Neural Networks for Complex Graphs
Pub Date: 2021-09-30 | DOI: 10.1017/9781108924184.012
In the earlier chapters, we discussed graph neural network models focusing on simple graphs, where the graphs are static and have only one type of node and one type of edge. However, graphs in many real-world applications are much more complicated: they typically have multiple types of nodes and edges, unique structures, and are often dynamic. As a consequence, these complex graphs present more intricate patterns that are beyond the capacity of the aforementioned models for simple graphs. Thus, dedicated efforts are needed to design graph neural network models for complex graphs, and such efforts can significantly impact the successful adoption of GNNs in a broader range of applications. In this chapter, using the complex graphs introduced in Section 2.6 as examples, we discuss methods to extend graph neural network models to capture more sophisticated patterns. More specifically, we describe more advanced graph filters designed for complex graphs to capture their specific properties.
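One common way to handle multiple edge types, in the spirit of the relation-aware graph filters this abstract mentions, is to aggregate neighbors separately per relation with relation-specific weights and sum the results. The following is a hedged sketch under that assumption; the function name, toy relations, and weights are all hypothetical:

```python
import numpy as np

def hetero_layer(adjs_by_rel, feats, weights_by_rel):
    """Per-relation mean aggregation with relation-specific weights."""
    out = np.zeros((feats.shape[0], weights_by_rel[0].shape[1]))
    for adj, w in zip(adjs_by_rel, weights_by_rel):
        deg = np.maximum(adj.sum(axis=1, keepdims=True), 1.0)  # avoid /0
        out += (adj @ feats / deg) @ w  # aggregate under this relation
    return np.maximum(out, 0.0)        # ReLU

# Three nodes, two edge types (e.g., "cites" and "authored-by").
a1 = np.array([[0, 1, 0], [1, 0, 0], [0, 0, 0]], dtype=float)
a2 = np.array([[0, 0, 1], [0, 0, 0], [1, 0, 0]], dtype=float)
feats = np.eye(3)                       # one-hot node features
w1, w2 = np.ones((3, 2)), 2 * np.ones((3, 2))
out = hetero_layer([a1, a2], feats, [w1, w2])
print(out.shape)
```

Because each relation gets its own weight matrix, the layer can treat a "cites" neighbor differently from an "authored-by" neighbor, which a single shared filter cannot.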
Citations: 0
Scalable Graph Neural Networks
Pub Date: 2021-09-30 | DOI: 10.1017/9781108924184.011
Citations: 0
Beyond GNNs: More Deep Models on Graphs
Pub Date: 2021-09-30 | DOI: 10.1017/9781108924184.013
Citations: 0
Robust Graph Neural Networks
Pub Date: 2021-09-30 | DOI: 10.1017/9781108924184.010
As generalizations of traditional DNNs to graphs, GNNs inherit both the advantages and the disadvantages of traditional DNNs. Like traditional DNNs, GNNs have been shown to be effective in many graph-related tasks, such as node-focused and graph-focused tasks. Traditional DNNs have been demonstrated to be vulnerable to carefully designed adversarial attacks (Goodfellow et al., 2014b; Xu et al., 2019b): the victimized samples are perturbed in ways that are not easily noticeable, yet lead to wrong results. It is increasingly evident that GNNs inherit this drawback as well. An adversary can generate graph adversarial perturbations by manipulating the graph structure or node features to fool GNN models. This limitation of GNNs has raised immense concerns about adopting them in safety-critical applications such as financial systems and risk management. For example, in a credit scoring system, fraudsters can fake connections with several high-credit customers to evade fraudster-detection models, and spammers can easily create fake followers to increase the chance of fake news being recommended and spread. Therefore, graph adversarial attacks and their countermeasures have attracted increasing research attention. In this chapter, we first introduce the concepts and definitions of graph adversarial attacks and detail some representative adversarial attack methods on graphs. Then, we discuss representative defense techniques against these adversarial attacks.
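The structure-manipulation attack described in this abstract can be illustrated on a toy model: inserting a single adversarial edge flips the prediction of a simple neighborhood-mean classifier. Everything below (the classifier, the features, and the graph) is an invented minimal example, not an attack method from the chapter:

```python
import numpy as np

def predict(adj, feats, node):
    """Toy classifier: sign of the mean feature over node + its neighbors."""
    nbrs = np.flatnonzero(adj[node])
    return 1 if feats[np.append(nbrs, node)].mean() > 0 else 0

feats = np.array([0.5, 0.6, -3.0])  # node 2 carries a strongly negative feature
adj = np.array([[0, 1, 0],
                [1, 0, 0],
                [0, 0, 0]], dtype=float)

clean = predict(adj, feats, 0)      # neighborhood {0, 1}: mean is positive
adj[0, 2] = adj[2, 0] = 1           # adversarial edge insertion: connect node 2
attacked = predict(adj, feats, 0)   # neighborhood {0, 1, 2}: mean turns negative
print(clean, attacked)
```

A single inserted edge (analogous to a fraudster faking one connection) is enough to change the target node's label, which is exactly why structural perturbations are so concerning.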
Citations: 0