
Proceedings of the 1st Workshop on Data Management for End-to-End Machine Learning: Latest Publications

Using Word Embedding to Enable Semantic Queries in Relational Databases
R. Bordawekar, O. Shmueli
We investigate opportunities for exploiting Artificial Intelligence (AI) techniques to enhance the capabilities of relational databases. In particular, we explore applications of Natural Language Processing (NLP) techniques to endow relational databases with capabilities that were previously very hard to realize in practice. We apply an unsupervised, neural-network-based NLP idea, Distributed Representation via Word Embedding, to extract latent information from a relational table. The word embedding model is built from a meaningful textual view of the relational database and captures inter- and intra-attribute relationships between database tokens. For each database token, the model includes a vector that encodes these contextual semantic relationships. These vectors enable a new class of SQL-based business intelligence queries, called cognitive intelligence (CI) queries, that use the generated vectors to analyze contextual semantic relationships between database tokens. These cognitive capabilities enable complex queries such as semantic matching, reasoning queries such as analogies, predictive queries involving entities not present in the database, and queries that use knowledge from external sources.
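The kind of semantic-matching CI query this abstract describes can be sketched in miniature: token vectors (standing in for embeddings learned over a textified view of the table) plus a cosine-similarity predicate that a query could call as a UDF. All names, vectors, and the threshold below are illustrative assumptions, not the paper's actual model.

```python
import math

# Hypothetical token vectors, as if learned by word2vec over a
# textified view of the relation's rows.
VECTORS = {
    "espresso": [0.9, 0.1, 0.0],
    "latte":    [0.8, 0.2, 0.1],
    "wrench":   [0.0, 0.9, 0.4],
}

def cosine(u, v):
    # Cosine similarity between two dense vectors.
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def semantic_match(token, candidates, threshold=0.8):
    """Stand-in for a CI-query UDF: keep candidates whose vectors
    are close to the query token's vector."""
    q = VECTORS[token]
    return [c for c in candidates if cosine(q, VECTORS[c]) >= threshold]
```

In a real CI query, such a predicate would appear in the WHERE clause of a SQL statement rather than as a Python filter.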
Citations: 54
EMT: End To End Model Training for MSR Machine Translation
Vishal Chowdhary, Scott Greenwood
Machine translation, at its core, is a Machine Learning (ML) problem that involves learning language translation from large amounts of parallel data, i.e., translations of the same content in two or more languages. If we have parallel data between languages L1 and L2, we can build translation systems between these two languages. When training a complete system, we train several different models, each containing a different type of information about one of the languages or about the relationship between the two. We end up training thousands of models to support hundreds of languages. In this article, we explain our end-to-end architecture for automatically training and deploying models at scale. The goal of this project is to create a fully automated system responsible for gathering new data, training systems, and shipping them to production with little or no guidance from an administrator. By using the ever-changing and ever-expanding contents of the web, we have a system that can quietly improve our existing systems over time. In this article, we detail the architecture and discuss the various problems and the solutions we arrived at. Finally, we present experiments and data showing the impact of our work. Specifically, this system has enabled us to ship much more frequently and to eliminate the human errors that occur when running repetitive tasks. The principles of this pipeline can be applied to any ML training and deployment system.
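The automated retrain-and-ship loop the abstract describes can be sketched as follows. `train` and `evaluate` are hypothetical stand-ins (not MSR's actual components); the point is the control flow: gather fresh data, retrain, and deploy only when the new model beats the current one, so no administrator needs to gate each release.

```python
def train(data):
    # Stand-in: the "model" just records the training-data size.
    return {"size": len(data)}

def evaluate(model):
    # Stand-in metric: quality grows with data size, with diminishing returns.
    return model["size"] / (model["size"] + 100)

def maybe_ship(current_score, new_data):
    """Retrain on freshly gathered data and ship only on improvement."""
    model = train(new_data)
    score = evaluate(model)
    if score > current_score:
        return model, score, True          # deploy the new model
    return None, current_score, False      # keep the old one
```

A production version of this loop would also archive every trained model, so a bad deployment can be rolled back mechanically.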
Citations: 2
Versioning for End-to-End Machine Learning Pipelines
T. V. D. Weide, D. Papadopoulos, O. Smirnov, Michal Zielinski, T. V. Kasteren
End-to-end machine learning pipelines that run in shared environments are challenging to implement. Production pipelines typically consist of multiple interdependent processing stages. Between stages, intermediate results are persisted to reduce redundant computation and to improve robustness. Those results might come in the form of datasets for data processing pipelines, or model coefficients in the case of model training pipelines. Reusing persisted results improves efficiency but at the same time creates complicated dependencies. Every time one of the processing stages is changed, whether due to a code change or a parameter change, it becomes difficult to determine which datasets can be reused and which should be recomputed. In this paper we build upon previous work to produce derivations of datasets, ensuring that multiple versions of a pipeline can run in parallel while minimizing redundant computation. Our extensions include partial derivations to simplify navigation and reuse, explicit support for schema changes in pipelines, and a central registry of running pipelines to coordinate pipeline upgrades between teams.
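One common way to decide "reuse or recompute" for a persisted stage result, in the spirit of the derivations this abstract describes, is content-addressed caching: hash everything that determines a stage's output (code version, parameters, and the keys of its inputs) and reuse the result only when the hash is already known. This is a minimal sketch under that assumption, not the paper's actual mechanism.

```python
import hashlib
import json

def stage_key(stage_name, code_version, params, input_keys):
    """Derive a cache key from everything that determines a stage's output."""
    payload = json.dumps(
        {"stage": stage_name, "code": code_version,
         "params": params, "inputs": sorted(input_keys)},
        sort_keys=True)
    return hashlib.sha256(payload.encode()).hexdigest()

def plan(stages, cache):
    """Walk the pipeline in topological order, reusing cached stages and
    recomputing the rest. Each stage is (name, code_version, params, deps)."""
    reused, recomputed, keys = [], [], {}
    for name, code, params, deps in stages:
        key = stage_key(name, code, params, [keys[d] for d in deps])
        keys[name] = key
        (reused if key in cache else recomputed).append(name)
        cache.add(key)
    return reused, recomputed
```

Because a stage's key includes the keys of its inputs, changing one upstream parameter automatically invalidates every downstream stage while untouched branches stay reusable.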
Citations: 23
Model-based Pricing: Do Not Pay for More than What You Learn!
Lingjiao Chen, Paraschos Koutris, Arun Kumar
While a lot of work has focused on improving the efficiency, scalability, and usability of machine learning (ML), little work has studied the cost of data acquisition for ML-based analytics. Datasets are already being bought and sold in marketplaces for various tasks, including ML. But current marketplaces force users to buy such data in whole or as fixed subsets without any awareness of the ML tasks they are used for. This leads to sub-optimal choices and missed opportunities for both data sellers and buyers. In this paper, we outline our vision for a formal and practical pricing framework we call model-based pricing that aims to resolve such issues. Our key observation is that ML users typically need only as much data as needed to meet their accuracy goals, which leads to novel trade-offs between price, accuracy, and runtimes. We explain how this raises interesting new research questions at the intersection of data management, ML, and micro-economics.
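The price/accuracy/runtime trade-off at the heart of this vision can be illustrated with a toy sketch: under model-based pricing, a buyer pays for just enough data to reach an accuracy target. The learning curve and per-sample price below are purely hypothetical assumptions for illustration, not anything from the paper.

```python
def accuracy(n_samples):
    # Assumed learning curve: diminishing returns in data size.
    return 1.0 - 1.0 / (1.0 + n_samples / 1000.0)

def samples_needed(target_accuracy):
    """Smallest dataset size whose (assumed) accuracy meets the target."""
    n = 0
    while accuracy(n) < target_accuracy:
        n += 100
    return n

def price(target_accuracy, per_sample=0.01):
    # The buyer pays for just enough data to hit the accuracy goal,
    # rather than for the whole dataset.
    return samples_needed(target_accuracy) * per_sample
```

Even in this caricature, the price is monotone in the accuracy target, which is one of the arbitrage-freeness properties a formal pricing framework would need to guarantee.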
Citations: 5
Towards Automatically Setting Language Bias in Relational Learning
Jose Picado, Arash Termehchy, Alan Fern, Sudhanshu Pathak
Relational databases are valuable resources for learning novel and interesting relations and concepts. Relational learning algorithms learn the definition of new relations in terms of the existing relations in the database. In order to constrain the search through the large space of candidate definitions, users must specify a language bias. Unfortunately, specifying the language bias is done via trial and error, guided by the expert's intuitions. Hence, it normally takes a great deal of time and effort to use these algorithms effectively. We report our ongoing work on building AutoMode, a system that leverages information in the schema and content of the database to automatically induce the language bias used by popular relational learning algorithms.
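In ILP-style relational learners, the language bias is typically expressed as mode declarations over the database's relations. A rough sketch of inducing such declarations from a schema, in the spirit of AutoMode, might look like this; the schema and the heuristic (treat the first column as an input argument, the rest as outputs) are illustrative assumptions, not the system's actual rules.

```python
# Hypothetical schema: relation name -> list of column types.
SCHEMA = {
    "advisedby": ["person", "person"],
    "publication": ["title", "person"],
}

def induce_modes(schema):
    """Generate ILP-style mode declarations from a relational schema."""
    modes = []
    for relation, col_types in schema.items():
        args = []
        for i, col_type in enumerate(col_types):
            marker = "+" if i == 0 else "-"   # "+" input, "-" output argument
            args.append(marker + col_type)
        modes.append("mode(%s(%s))" % (relation, ", ".join(args)))
    return modes
```

A real system would refine these choices using the database's content, e.g. join statistics between columns, rather than a fixed positional rule.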
Citations: 2
On Model Discovery For Hosted Data Science Projects
Hui Miao, Ang Li, L. Davis, A. Deshpande
Alongside the development of systems for scalable machine learning and collaborative data science, there is an increasing trend toward publicly shared data science projects, hosted in general-purpose or dedicated hosting services such as GitHub and DataHub. The artifacts of hosted projects are rich and include not only text files but also versioned datasets, trained models, project documents, etc. Given the fast pace and expectations of data science activities, model discovery, i.e., finding relevant data science projects to reuse, is an important task in the context of data management for end-to-end machine learning. In this paper, we study this task and present ongoing work on ModelHub Discovery, a system for finding relevant models in hosted data science projects. Instead of prescribing a structured data model for data science projects, we take an information retrieval approach, decomposing the discovery task into three major steps: project query and matching, model comparison and ranking, and processing and building ensembles with the returned models. We describe the motivation and desiderata, propose techniques, and present opportunities and challenges for model discovery in hosted data science projects.
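The first of the three steps, project query and matching, can be caricatured with a tiny keyword-overlap ranker. The project corpus and the Jaccard scoring below are illustrative assumptions, far simpler than whatever ModelHub Discovery actually uses, but they show the information-retrieval shape of the approach.

```python
# Hypothetical corpus: project name -> description text.
PROJECTS = {
    "face-detect": "cnn face detection image model pytorch",
    "sentiment":   "lstm text sentiment classification model",
    "pose":        "cnn human pose estimation image model",
}

def jaccard(a, b):
    # Token-set overlap between two whitespace-separated strings.
    sa, sb = set(a.split()), set(b.split())
    return len(sa & sb) / len(sa | sb)

def rank_projects(query, projects, top_k=2):
    """Rank hosted projects by similarity to a free-text query."""
    scored = sorted(projects,
                    key=lambda p: jaccard(query, projects[p]),
                    reverse=True)
    return scored[:top_k]
```

The later steps (model comparison and ensembling) would operate on the artifacts of the returned projects, not just their text, which is where the richer versioned datasets and trained models come in.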
Citations: 20
Proceedings of the 1st Workshop on Data Management for End-to-End Machine Learning
Citations: 0