
Latest Publications in Semi-Supervised Learning

Prediction of Protein Function from Networks
Pub Date: 2006-11-01 DOI: 10.7551/MITPRESS/9780262033589.003.0020
Hyunjung Shin, K. Tsuda
This chapter contains sections titled: Introduction, Graph-Based Semi-Supervised Learning, Combining Multiple Graphs, Experiments on Function Prediction of Proteins, Conclusion and Outlook
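As a rough illustration of two ingredients named in the section list, graph-based label propagation and the combination of multiple networks, here is a minimal numpy sketch. The toy adjacency matrices, the combination weights `betas`, and the regularization parameter `mu` are illustrative assumptions rather than the chapter's actual data or tuned settings.

```python
import numpy as np

def combine_graphs(adjacencies, betas):
    """Convex combination of several network adjacency matrices
    (e.g., protein-protein interaction, co-expression, sequence
    similarity), the basic move in multiple-graph SSL."""
    return sum(b * A for b, A in zip(betas, adjacencies))

def propagate_labels(W, y, labeled, mu=1.0):
    """One standard graph-regularization solution f = (I + mu*L)^{-1} y,
    where L is the unnormalized Laplacian of the combined graph W."""
    n = W.shape[0]
    L = np.diag(W.sum(axis=1)) - W
    y_obs = np.zeros(n)
    y_obs[labeled] = y[labeled]          # +1 / -1 for labeled proteins, 0 otherwise
    f = np.linalg.solve(np.eye(n) + mu * L, y_obs)
    return np.sign(f)                    # predicted function labels

# Toy example with two hypothetical 4-protein networks.
A1 = np.array([[0, 1, 1, 0], [1, 0, 0, 0], [1, 0, 0, 1], [0, 0, 1, 0]], float)
A2 = np.array([[0, 0, 1, 1], [0, 0, 1, 0], [1, 1, 0, 0], [1, 0, 0, 0]], float)
W = combine_graphs([A1, A2], betas=[0.6, 0.4])
y = np.array([1.0, -1.0, 0.0, 0.0])      # only proteins 0 and 1 labeled
print(propagate_labels(W, y, labeled=[0, 1]))
```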
{"title":"Prediction of Protein Function from Networks","authors":"Hyunjung Shin, K. Tsuda","doi":"10.7551/MITPRESS/9780262033589.003.0020","DOIUrl":"https://doi.org/10.7551/MITPRESS/9780262033589.003.0020","url":null,"abstract":"This chapter contains sections titled: Introduction, Graph-Based Semi-Supervised Learning, Combining Multiple Graphs, Experiments on Function Prediction of Proteins, Conclusion and Outlook","PeriodicalId":345393,"journal":{"name":"Semi-Supervised Learning","volume":"9 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2006-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125326803","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 20
Graph Kernels by Spectral Transforms
Pub Date: 2006-09-16 DOI: 10.7551/mitpress/9780262033589.003.0015
Xiaojin Zhu, J. Kandola, J. Lafferty, Zoubin Ghahramani
Many graph-based semi-supervised learning methods can be viewed as imposing smoothness conditions on the target function with respect to a graph representing the data points to be labeled. The smoothness properties of the functions are encoded in terms of Mercer kernels over the graph. The central quantity in such regularization is the spectral decomposition of the graph Laplacian, a matrix derived from the graph's edge weights. The eigenvectors with small eigenvalues are smooth, and ideally represent large cluster structures within the data. The eigenvectors having large eigenvalues are rugged, and considered noise. Different weightings of the eigenvectors of the graph Laplacian lead to different measures of smoothness. Such weightings can be viewed as spectral transforms, that is, as transformations of the standard eigenspectrum that lead to different regularizers over the graph. Familiar kernels, such as the diffusion kernel obtained by solving a discrete heat equation on the graph, can be seen as simple parametric spectral transforms. The question naturally arises whether one can obtain effective spectral transforms automatically. In this paper we develop an approach to searching over a nonparametric family of spectral transforms by using convex optimization to maximize kernel alignment to the labeled data. Order constraints are imposed to encode a preference for smoothness with respect to the graph structure. This results in a flexible family of kernels that is more data-driven than the standard parametric spectral transforms. Our approach relies on a quadratically constrained quadratic program (QCQP), and is computationally practical for large datasets.
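A minimal sketch of the pipeline the abstract describes: build a graph Laplacian, eigendecompose it, and reweight the spectrum to obtain a kernel. The diffusion transform r(λ) = exp(−σ²λ/2) is the familiar parametric case the abstract mentions; the toy graph and σ are assumptions, and the paper's nonparametric, order-constrained QCQP step is only noted in a comment.

```python
import numpy as np

# Toy weighted graph on 5 nodes (symmetric adjacency).
W = np.array([[0, 1, 1, 0, 0],
              [1, 0, 1, 0, 0],
              [1, 1, 0, 1, 0],
              [0, 0, 1, 0, 1],
              [0, 0, 0, 1, 0]], float)

L = np.diag(W.sum(axis=1)) - W            # graph Laplacian
lam, U = np.linalg.eigh(L)                # eigenvalues ascending: small = smooth

def spectral_transform_kernel(lam, U, r):
    """Build a graph kernel K = sum_i r(lam_i) u_i u_i^T.
    Any nonnegative, nonincreasing r yields a valid Mercer kernel
    that favors the smooth (small-eigenvalue) eigenvectors."""
    return (U * r(lam)) @ U.T

# Diffusion kernel: a simple parametric spectral transform.
sigma2 = 1.0
K_diff = spectral_transform_kernel(lam, U, lambda l: np.exp(-sigma2 * l / 2))

# The nonparametric family in the paper instead learns weights
# mu_1 >= mu_2 >= ... >= 0 by maximizing kernel alignment (a QCQP);
# here we only verify that the parametric kernel is PSD.
assert np.all(np.linalg.eigvalsh(K_diff) > -1e-10)
print(np.round(K_diff, 3))
```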
{"title":"Graph Kernels by Spectral Transforms","authors":"Xiaojin Zhu, J. Kandola, J. Lafferty, Zoubin Ghahramani","doi":"10.7551/mitpress/9780262033589.003.0015","DOIUrl":"https://doi.org/10.7551/mitpress/9780262033589.003.0015","url":null,"abstract":"Many graph-based semi-supervised learning methods can be viewed as imposing smoothness conditions on the target function with respect to a graph representing the data points to be labeled. The smoothness properties of the functions are encoded in terms of Mercer kernels over the graph. The central quantity in such regularization is the spectral decomposition of the graph Laplacian, a matrix derived from the graph's edge weights. The eigenvectors with small eigenvalues are smooth, and ideally represent large cluster structures within the data. The eigenvectors having large eigenvalues are rugged, and considered noise. Different weightings of the eigenvectors of the graph Laplacian lead to different measures of smoothness. Such weightings can be viewed as spectral transforms, that is, as transformations of the standard eigenspectrum that lead to different regularizers over the graph. Familiar kernels, such as the diffusion kernel resulting by solving a discrete heat equation on the graph, can be seen as simple parametric spectral transforms. The question naturally arises whether one can obtain effective spectral transforms automatically. In this paper we develop an approach to searching over a nonparametric family of spectral transforms by using convex optimization to maximize kernel alignment to the labeled data. Order constraints are imposed to encode a preference for smoothness with respect to the graph structure. This results in a flexible family of kernels that is more data-driven than the standard parametric spectral transforms. Our approach relies on a quadratically constrained quadratic program (QCQP), and is computationally practical for large datasets.","PeriodicalId":345393,"journal":{"name":"Semi-Supervised Learning","volume":"82 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2006-09-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131491314","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 51
Probabilistic Semi-Supervised Clustering with Constraints
Pub Date: 1900-01-01 DOI: 10.7551/mitpress/9780262033589.003.0005
Sugato Basu, M. Bilenko, A. Banerjee, R. Mooney
In certain clustering tasks it is possible to obtain limited supervision in the form of pairwise constraints, i.e., pairs of instances labeled as belonging to same or different clusters. The resulting problem is known as semi-supervised clustering, an instance of semi-supervised learning stemming from a traditional unsupervised learning setting. Several algorithms exist for enhancing clustering quality by using supervision in the form of constraints. These algorithms typically utilize the pairwise constraints to either modify the clustering objective function or to learn the clustering distortion measure. This chapter describes an approach that employs Hidden Markov Random Fields (HMRFs) as a probabilistic generative model for semi-supervised clustering, thereby providing a principled framework for incorporating constraint-based supervision into prototype-based clustering. The HMRF-based model allows the use of a broad range of clustering distortion measures, including Bregman divergences (e.g., squared Euclidean distance, KL divergence) and directional distance measures (e.g., cosine distance), making it applicable to a number of domains. The model leads to the HMRF-KMeans algorithm, which minimizes an objective function derived from the joint probability of the model, and allows unification of constraint-based and distance-based semi-supervised clustering methods. Additionally, a two-phase active learning algorithm for selecting informative pairwise constraints in a query-driven framework is derived from the HMRF model, facilitating improved clustering performance with relatively small amounts of supervision from the user.
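The objective behind HMRF-KMeans can be sketched as k-means distortion plus penalties for violated must-link and cannot-link constraints. The single assignment sweep below, with toy data, squared Euclidean distortion, and a fixed penalty weight `w`, is an illustrative simplification, not the chapter's full EM-style algorithm.

```python
import numpy as np

def constrained_assign(X, centers, must, cannot, labels, w=10.0):
    """One HMRF-KMeans-style assignment sweep: each point picks the
    cluster minimizing distortion + w * (#violated pairwise constraints)
    against the already-assigned members of its constraint pairs."""
    k = len(centers)
    for i in range(len(X)):
        costs = np.array([np.sum((X[i] - c) ** 2) for c in centers])
        for (a, b) in must:                 # must-link: penalize split pairs
            j = b if i == a else a if i == b else None
            if j is not None and labels[j] >= 0:
                costs += w * (np.arange(k) != labels[j])
        for (a, b) in cannot:               # cannot-link: penalize merged pairs
            j = b if i == a else a if i == b else None
            if j is not None and labels[j] >= 0:
                costs += w * (np.arange(k) == labels[j])
        labels[i] = int(np.argmin(costs))
    return labels

X = np.array([[0.0, 0.0], [0.1, 0.0], [5.0, 5.0], [5.1, 5.0]])
centers = [np.array([0.0, 0.0]), np.array([5.0, 5.0])]
labels = np.full(len(X), -1)                # -1 marks "not yet assigned"
print(constrained_assign(X, centers, must=[(0, 1)], cannot=[(1, 2)], labels=labels))
```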
{"title":"Probabilistic Semi-Supervised Clustering with Constraints","authors":"Sugato Basu, M. Bilenko, A. Banerjee, R. Mooney","doi":"10.7551/mitpress/9780262033589.003.0005","DOIUrl":"https://doi.org/10.7551/mitpress/9780262033589.003.0005","url":null,"abstract":"In certain clustering tasks it is possible to obtain limited supervision in the form of pairwise constraints, i.e., pairs of instances labeled as belonging to same or different clusters. The resulting problem is known as semi-supervised clustering, an instance of semi-supervised learning stemming from a traditional unsupervised learning setting. Several algorithms exist for enhancing clustering quality by using supervision in the form of constraints. These algorithms typically utilize the pairwise constraints to either modify the clustering objective function or to learn the clustering distortion measure. This chapter describes an approach that employs Hidden Markov Random Fields (HMRFs) as a probabilistic generative model for semi-supervised clustering, thereby providing a principled framework for incorporating constraint-based supervision into prototype-based clustering. The HMRF-based model allows the use of a broad range of clustering distortion measures, including Bregman divergences (e.g., squared Euclidean distance, KL divergence) and directional distance measures (e.g., cosine distance), making it applicable to a number of domains. The model leads to the HMRF-KMeans algorithm which minimizes an objective function derived from the joint probability of the model, and allows unification of constraint-based and distance-based semi-supervised clustering methods. Additionally, a two-phase active learning algorithm for selecting informative pairwise constraints in a querydriven framework is derived from the HMRF model, facilitating improved clustering performance with relatively small amounts of supervision from the user.","PeriodicalId":345393,"journal":{"name":"Semi-Supervised Learning","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1900-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123133957","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 70
Metric-Based Approaches for Semi-Supervised Regression and Classification
Pub Date: 1900-01-01 DOI: 10.7551/mitpress/9780262033589.003.0023
D. Schuurmans, F. Southey, Dana F. Wilkinson, Yuhong Guo
Semi-supervised learning methods typically require an explicit relationship to be asserted between the labeled and unlabeled data—as illustrated, for example, by the neighbourhoods used in graph-based methods. Semi-supervised model selection and regularization methods are presented here that instead require only that the labeled and unlabeled data are drawn from the same distribution. From this assumption, a metric can be constructed over hypotheses based on their predictions for unlabeled data. This metric can then be used to detect untrustworthy training error estimates, leading to model selection strategies that select the richest hypothesis class while providing theoretical guarantees against over-fitting. This general approach is then adapted to regularization for supervised regression and supervised classification with probabilistic classifiers. The regularization adapts not only to the hypothesis class but also to the specific data sample provided, allowing for better performance than regularizers that account only for class complexity.
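The central construction, a metric over hypotheses estimated from their predictions on unlabeled data, fits in a few lines. The sketch below applies a triangle-inequality style check in the spirit of the chapter to flag untrustworthy training-error estimates; the threshold classifiers and toy data are assumptions for illustration.

```python
import numpy as np

def disagreement(f, g, X_unlabeled):
    """Metric between hypotheses: expected prediction disagreement,
    estimated on unlabeled data (no labels needed)."""
    return np.mean(f(X_unlabeled) != g(X_unlabeled))

def tri_test(f, g, X_unlab, X_train, y_train):
    """Triangle-inequality sanity check: true risks must satisfy
    d(f, g) <= risk(f) + risk(g). If the unlabeled-data distance
    between f and g exceeds the sum of their *training* errors, at
    least one training error underestimates the true risk."""
    err_f = np.mean(f(X_train) != y_train)
    err_g = np.mean(g(X_train) != y_train)
    return disagreement(f, g, X_unlab) <= err_f + err_g  # False => overfitting suspected

# Toy 1-D example with two hypothetical threshold classifiers.
rng = np.random.default_rng(0)
X_train, y_train = rng.normal(size=20), rng.integers(0, 2, 20)
X_unlab = rng.normal(size=1000)
f = lambda x: (x > 0.0).astype(int)      # candidate (possibly overfit)
g = lambda x: (x > 0.5).astype(int)      # simpler reference hypothesis
print(tri_test(f, g, X_unlab, X_train, y_train))
```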
{"title":"Metric-Based Approaches for Semi-Supervised Regression and Classification","authors":"D. Schuurmans, F. Southey, Dana F. Wilkinson, Yuhong Guo","doi":"10.7551/mitpress/9780262033589.003.0023","DOIUrl":"https://doi.org/10.7551/mitpress/9780262033589.003.0023","url":null,"abstract":"Semi-supervised learning methods typically require an explicit relationship to be asserted between the labeled and unlabeled data—as illustrated, for example, by the neighbourhoods used in graph-based methods. Semi-supervised model selection and regularization methods are presented here that instead require only that the labeled and unlabeled data are drawn from the same distribution. From this assumption, a metric can be constructed over hypotheses based on their predictions for unlabeled data. This metric can then be used to detect untrustworthy training error estimates, leading to model selection strategies that select the richest hypothesis class while providing theoretical guarantees against over-fitting. This general approach is then adapted to regularization for supervised regression and supervised classification with probabilistic classifiers. The regularization adapts not only to the hypothesis class but also to the specific data sample provided, allowing for better performance than regularizers that account only for class complexity.","PeriodicalId":345393,"journal":{"name":"Semi-Supervised Learning","volume":"19 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1900-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116678996","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 14
Label Propagation and Quadratic Criterion
Pub Date: 1900-01-01 DOI: 10.7551/mitpress/9780262033589.003.0011
Yoshua Bengio, Olivier Delalleau, Nicolas Le Roux
{"title":"Label Propagation and Quadratic Criterion","authors":"Yoshua Bengio, Olivier Delalleau, Nicolas Le Roux","doi":"10.7551/mitpress/9780262033589.003.0011","DOIUrl":"https://doi.org/10.7551/mitpress/9780262033589.003.0011","url":null,"abstract":"","PeriodicalId":345393,"journal":{"name":"Semi-Supervised Learning","volume":"70 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1900-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127598086","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 51
The Geometric Basis of Semi-Supervised Learning
Pub Date: 1900-01-01 DOI: 10.7551/mitpress/9780262033589.003.0012
Vikas Sindhwani, M. Belkin, P. Niyogi
{"title":"The Geometric Basis of Semi-Supervised Learning","authors":"Vikas Sindhwani, M. Belkin, P. Niyogi","doi":"10.7551/mitpress/9780262033589.003.0012","DOIUrl":"https://doi.org/10.7551/mitpress/9780262033589.003.0012","url":null,"abstract":"","PeriodicalId":345393,"journal":{"name":"Semi-Supervised Learning","volume":"42 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1900-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"134070737","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 36
Entropy Regularization
Pub Date: 1900-01-01 DOI: 10.7551/mitpress/9780262033589.003.0009
Yves Grandvalet, Yoshua Bengio
{"title":"Entropy Regularization","authors":"Yves Grandvalet, Yoshua Bengio","doi":"10.7551/mitpress/9780262033589.003.0009","DOIUrl":"https://doi.org/10.7551/mitpress/9780262033589.003.0009","url":null,"abstract":"","PeriodicalId":345393,"journal":{"name":"Semi-Supervised Learning","volume":"11 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1900-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121206315","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 81
Gaussian Processes and the Null-Category Noise Model
Pub Date: 1900-01-01 DOI: 10.7551/mitpress/9780262033589.003.0008
Neil D. Lawrence, Michael I. Jordan
{"title":"Gaussian Processes and the Null-Category Noise Model","authors":"Neil D. Lawrence, Michael I. Jordan","doi":"10.7551/mitpress/9780262033589.003.0008","DOIUrl":"https://doi.org/10.7551/mitpress/9780262033589.003.0008","url":null,"abstract":"","PeriodicalId":345393,"journal":{"name":"Semi-Supervised Learning","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1900-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128926455","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 7
Transductive Inference and Semi-Supervised Learning
Pub Date: 1900-01-01 DOI: 10.7551/mitpress/9780262033589.003.0024
V. Vapnik
This chapter discusses the difference between transductive inference and semi-supervised learning. It argues that transductive inference captures the intrinsic properties of the mechanism for extracting additional information from the unlabeled data. It also shows an important role of transduction in creating noninductive models of inference. The chapter begins with the formal problem setting for transductive inference and semi-supervised learning: given a labeled training set and a sequence of k test vectors, find, among an admissible set of binary vectors, the one that best classifies the test sequence. These remarks were inspired by the discussion "What is the Difference between Transductive Inference and Semi-Supervised Learning?" that took place during a workshop near Tübingen, Germany (May 24, 2005).
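The problem setting in the abstract, choosing among an admissible set of binary labelings of the test vectors, can be made concrete with a deliberately tiny brute-force sketch. The within-class-scatter score used below is an illustrative stand-in for a real transductive selection criterion such as margin size, and the data are toy assumptions.

```python
import itertools
import numpy as np

def transduce(X_train, y_train, X_test):
    """Brute-force transduction: enumerate all binary labelings of the
    test vectors and return the one that, jointly with the fixed
    training labels, minimizes within-class scatter. Exponential in
    the number of test points, so feasible only for tiny k."""
    best_y, best_score = None, np.inf
    for y_test in itertools.product([-1, 1], repeat=len(X_test)):
        y = np.concatenate([y_train, y_test])
        X = np.vstack([X_train, X_test])
        score = sum(np.sum((X[y == c] - X[y == c].mean(axis=0)) ** 2)
                    for c in (-1, 1) if np.any(y == c))
        if score < best_score:
            best_y, best_score = np.array(y_test), score
    return best_y

X_train = np.array([[0.0, 0.0], [4.0, 4.0]])
y_train = np.array([-1, 1])
X_test = np.array([[0.2, 0.1], [3.9, 4.2], [0.1, 0.3]])
print(transduce(X_train, y_train, X_test))   # expect [-1, 1, -1]
```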
{"title":"Transductive Inference and Semi-Supervised Learning","authors":"V. Vapnik","doi":"10.7551/mitpress/9780262033589.003.0024","DOIUrl":"https://doi.org/10.7551/mitpress/9780262033589.003.0024","url":null,"abstract":"This chapter discusses the difference between transductive inference and semi-supervised learning. It argues that transductive inference captures the intrinsic properties of the mechanism for extracting additional information from the unla-beled data. It also shows an important role of transduction for creating noninductive models of inference. Let us start with the formal problem setting for transductive inference and semi-supervised learning. and a sequence of k test vectors, find among an admissible set of binary vectors, 1. These remarks were inspired by the discussion, What is the Difference between Trans-ductive Inference and Semi-Supervised Learning?, that took place during a workshop close to Tübingen, Germany (May 24, 2005).","PeriodicalId":345393,"journal":{"name":"Semi-Supervised Learning","volume":"5 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1900-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127136211","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 43
Large-Scale Algorithms
Pub Date: 1900-01-01 DOI: 10.7551/mitpress/9780262033589.003.0018
Olivier Delalleau, Yoshua Bengio, Nicolas Le Roux
{"title":"Large-Scale Algorithms","authors":"Olivier Delalleau, Yoshua Bengio, Nicolas Le Roux","doi":"10.7551/mitpress/9780262033589.003.0018","DOIUrl":"https://doi.org/10.7551/mitpress/9780262033589.003.0018","url":null,"abstract":"","PeriodicalId":345393,"journal":{"name":"Semi-Supervised Learning","volume":"62 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1900-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123890726","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 1