
LivingLab '13: Latest Publications

A month in the life of a production news recommender system
Pub Date: 2013-11-01 DOI: 10.1145/2513150.2513159
A. Said, Jimmy J. Lin, Alejandro Bellogín, A. D. Vries
During the last decade, recommender systems have become a ubiquitous feature in the online world. Research on systems and algorithms in this area has flourished, leading to novel techniques for personalization and recommendation. The evaluation of recommender systems, however, has not seen similar progress---techniques have changed little since the advent of recommender systems, when evaluation methodologies were "borrowed" from related research areas. As an effort to move evaluation methodology forward, this paper describes a production recommender system infrastructure that allows research systems to be evaluated in situ, by real-world metrics such as user clickthrough. We present an analysis of one month of interactions with this infrastructure and share our findings.
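The core in-situ signal this paper relies on is clickthrough: which recommender's suggestions users actually click. As a minimal illustration (not the paper's actual infrastructure), the sketch below computes per-system clickthrough rate from a hypothetical interaction log; the event format and system names are assumptions.

```python
from collections import defaultdict

def clickthrough_rate(events):
    """Compute per-system CTR from a stream of interaction events.

    Each event is a dict such as
    {"system": "cf-baseline", "type": "impression"} or
    {"system": "cf-baseline", "type": "click"}.
    """
    impressions = defaultdict(int)
    clicks = defaultdict(int)
    for event in events:
        if event["type"] == "impression":
            impressions[event["system"]] += 1
        elif event["type"] == "click":
            clicks[event["system"]] += 1
    # Only systems with at least one impression receive a CTR.
    return {s: clicks[s] / impressions[s] for s in impressions}

log = [
    {"system": "cf-baseline", "type": "impression"},
    {"system": "cf-baseline", "type": "click"},
    {"system": "content-based", "type": "impression"},
]
print(clickthrough_rate(log))  # {'cf-baseline': 1.0, 'content-based': 0.0}
```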
Citations: 27
Evaluation for operational IR applications: generalizability and automation
Pub Date: 2013-11-01 DOI: 10.1145/2513150.2513160
Melanie Imhof, Martin Braschler, P. Hansen, Stefan Rietberger
Black box information retrieval (IR) application evaluation allows practitioners to measure the quality of their IR application. Instead of evaluating specific components, e.g. solely the search engine, a complete IR application, including the user's perspective, is evaluated. The evaluation methodology is designed to be applicable to operational IR applications. The black box evaluation methodology could be packaged into an evaluation and monitoring tool, making it usable for industry stakeholders. The tool should lead practitioners through the evaluation process and maintain the test results for the manual and automatic tests. This paper shows that the methodology is generalizable, even though the diversity of IR applications is high. The challenges in automating tests are the simulation of tasks that require intellectual effort and the handling of different visualizations of the same concept.
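To make the automation challenge concrete, here is a minimal sketch of one automatable black-box check: query the running IR application end-to-end and verify that a known relevant item appears in the top results. The `/search` endpoint, response shape, and test cases are illustrative assumptions, not the paper's tool.

```python
import requests

# (query, id of a document that should appear in the top 10) -- hypothetical cases
TEST_CASES = [
    ("annual report 2012", "doc-4711"),
    ("contact support", "page-contact"),
]

def run_black_box_tests(base_url):
    """Return the test cases that fail against the live application."""
    failures = []
    for query, expected_id in TEST_CASES:
        resp = requests.get(f"{base_url}/search", params={"q": query, "rows": 10})
        resp.raise_for_status()
        returned_ids = [hit["id"] for hit in resp.json()["results"]]
        if expected_id not in returned_ids:
            failures.append((query, expected_id))
    return failures

if __name__ == "__main__":
    for query, doc_id in run_black_box_tests("http://localhost:8080"):
        print(f"FAIL: '{query}' did not return {doc_id} in the top 10")
```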
Citations: 0
Using CrowdLogger for in situ information retrieval system evaluation
Pub Date: 2013-11-01 DOI: 10.1145/2513150.2513164
H. Feild, James Allan
A major hurdle faced by many information retrieval researchers---especially in academia---is evaluating retrieval systems in the wild. Challenges include tapping into large user bases, collecting user behavior, and modifying a given retrieval system. We outline several options available to researchers to overcome these challenges along with their advantages and disadvantages. We then demonstrate how CrowdLogger, an open-source browser extension for Firefox and Google Chrome, can be used as an in situ evaluation platform.
Citations: 4
Online metrics for web search relevance
Pub Date: 2013-11-01 DOI: 10.1145/2513150.2513165
Jan O. Pedersen
Information Retrieval has a long tradition of being metrics driven. Ranking algorithms are assessed with respect to some utility measure that reflects the likelihood of satisfying an information need. Traditionally these metrics are based on offline judgments. This is very flexible since judgments can be made for any desired output. However, judgments are no better than judgment guidelines and are at some distance from the actual user experience. Modern Web Search engines enjoy an additional resource; existing web search traffic and its attendant wealth of user engagement data. Primarily this signal consists of logged queries and user actions, including clicks and reformulations. I will discuss how this data can be used to derive Web Search quality metrics that have very different properties than traditional offline metrics.
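One example of such an online metric is query abandonment rate: the fraction of logged queries that receive no click, a rough proxy for user dissatisfaction. A minimal sketch, assuming a simple per-query log format of the author's kind of data:

```python
def abandonment_rate(queries):
    """Fraction of logged queries with no click.

    Each entry is a dict like {"query": "...", "clicks": [clicked URLs]}.
    """
    if not queries:
        return 0.0
    abandoned = sum(1 for q in queries if not q["clicks"])
    return abandoned / len(queries)

log = [
    {"query": "acm cikm 2013", "clicks": ["http://www.cikm2013.org/"]},
    {"query": "living lab workshop", "clicks": []},  # abandoned query
]
print(abandonment_rate(log))  # 0.5
```

Unlike an offline judgment, this number moves with the real user experience: a ranking change that pushes relevant results below the fold shows up as more abandoned queries.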
Citations: 1
A private living lab for requirements based evaluation
Pub Date: 2013-11-01 DOI: 10.1145/2513150.2513158
Christian Beutenmüller, Stefan Bordag, Ramin Assadollahi
A "Living Lab" is described as an open innovation space for the cooperation of users, researchers and even companies to participate in a common process to develop innovative solutions. An architecture for a living lab for IR has been proposed in [1]. In this paper we propose a method and system that foregoes the inherent openness of the living lab and implements a private living lab to enable the cooperation between a research department, agile software development, and requirements engineering/quality assurance. This allows the research department to overcome the limitations of the usual approach to gold standard-based evaluation, while preserving its positive aspects. The definition of a private living lab may be seen along the lines of the separation of public and private clouds in cloud computing.
Citations: 1
Factors affecting conditions of trust in participant recruiting and retention: a position paper
Pub Date: 2013-11-01 DOI: 10.1145/2513150.2513161
Catherine L. Smith
This paper contemplates some of the challenges faced in recruiting and developing a community of contributors (participants/subjects) for a living laboratory for IR evaluation. We briefly review several factors that may affect the efficacy of participant recruiting. The potential benefits of collaboration with respect to recruiting are also discussed briefly.
Citations: 3
Lerot: an online learning to rank framework
Pub Date: 2013-11-01 DOI: 10.1145/2513150.2513162
Anne Schuth, Katja Hofmann, Shimon Whiteson, M. de Rijke
Online learning to rank methods for IR allow retrieval systems to optimize their own performance directly from interactions with users via click feedback. In the software package Lerot, presented in this paper, we have bundled all ingredients needed for experimenting with online learning to rank for IR. Lerot includes several online learning algorithms, interleaving methods and a full suite of ways to evaluate these methods. In the absence of real users, the evaluation method bundled in the software package is based on simulations of users interacting with the search engine. The software presented here has been used to verify findings of over six papers at major information retrieval venues over the last few years.
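Interleaving, one of the ingredients Lerot bundles, merges the rankings of two competing systems into a single result list so that user clicks decide which system wins the comparison. The sketch below is a generic team-draft interleaving in the spirit of those methods, not Lerot's actual API:

```python
import random

def team_draft_interleave(ranking_a, ranking_b, length, rng=random):
    """Merge two rankings team-draft style.

    Returns the interleaved list plus the documents credited to each
    team; at evaluation time, clicks on team_a's documents count as
    wins for system A, and likewise for B.
    """
    interleaved, team_a, team_b = [], [], []
    seen = set()
    while len(interleaved) < length:
        # The team with fewer picks drafts next; ties broken by coin flip.
        if len(team_a) < len(team_b) or (
            len(team_a) == len(team_b) and rng.random() < 0.5
        ):
            source, team = ranking_a, team_a
        else:
            source, team = ranking_b, team_b
        doc = next((d for d in source if d not in seen), None)
        if doc is None:  # chosen ranking exhausted; stop for brevity
            break
        seen.add(doc)
        interleaved.append(doc)
        team.append(doc)
    return interleaved, team_a, team_b

merged, a_docs, b_docs = team_draft_interleave(
    ["d1", "d2", "d3"], ["d3", "d4", "d5"], length=4
)
```

In a simulation setting like Lerot's, a click model would then generate clicks over `merged`, and the team with more clicked documents is credited with the win.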
Citations: 49
FindiLike: a preference driven entity search engine for evaluating entity retrieval and opinion summarization
Pub Date: 2013-11-01 DOI: 10.1145/2513150.2513163
Kavita A. Ganesan, ChengXiang Zhai
We describe a novel preference-driven search engine (FindiLike) which allows users to find entities of interest based on preferences and also allows users to digest opinions about the retrieved entities easily. FindiLike leverages large amounts of online reviews about various entities, and ranks entities based on how well their associated reviews match a user's preference query (expressed in keywords). FindiLike then uses abstractive summarization techniques to generate concise opinion summaries to enable users to digest the opinions about an entity. We discuss how the system can be extended to support in situ evaluation of two interesting new tasks, i.e., opinion-based entity ranking and abstractive summarization of opinions. The system is currently supporting hotel search and being extended to support in situ evaluation of these two tasks. We will demonstrate the system in the domain of hotel search and show how in situ evaluation can be supported through natural user interaction with the system.
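To make the ranking idea concrete: each entity is scored by how well its pooled reviews match the preference keywords. The toy sketch below uses simple term matching as an illustrative stand-in for FindiLike's actual ranking model; the hotel names and reviews are invented.

```python
def rank_entities(entities, preference_query):
    """Rank entities by how many preference terms their reviews mention.

    entities: {entity name: [review texts]}
    preference_query: keyword string, e.g. "clean friendly breakfast"
    """
    terms = set(preference_query.lower().split())
    scored = []
    for name, reviews in entities.items():
        text = " ".join(reviews).lower()
        score = sum(1 for t in terms if t in text)  # matched preference terms
        scored.append((score, name))
    return [name for score, name in sorted(scored, reverse=True)]

hotels = {
    "Hotel A": ["clean rooms, friendly staff", "great breakfast"],
    "Hotel B": ["noisy at night", "breakfast was mediocre"],
}
print(rank_entities(hotels, "clean friendly breakfast"))
# ['Hotel A', 'Hotel B']
```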
Citations: 1