
Proceedings of the 2016 ACM Workshop on Multimedia COMMONS: Latest Publications

YFCC100M HybridNet fc6 Deep Features for Content-Based Image Retrieval
Pub Date : 2016-10-16 DOI: 10.1145/2983554.2983557
Giuseppe Amato, F. Falchi, C. Gennaro, F. Rabitti
This paper presents a corpus of deep features extracted from the YFCC100M images, obtained from the fc6 hidden-layer activations of the HybridNet deep convolutional neural network. For a set of randomly selected queries, we make available k-NN results obtained by sequentially scanning the entire feature set, comparing results computed with both the Euclidean distance and the Hamming distance on a binarized version of the features. This set of results serves as ground truth for evaluating Content-Based Image Retrieval (CBIR) systems that use approximate similarity search methods for efficient and scalable indexing. Moreover, we present experimental results obtained by indexing this corpus with two distinct approaches: the Metric Inverted File and Lucene Quantization. These two CBIR systems are publicly available online, allowing real-time search using both internal and external queries.
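The sequential-scan ground truth described above can be sketched as follows. This is a minimal illustration, not the authors' code: the zero threshold for binarization and the toy 64-dimensional vectors are assumptions (HybridNet fc6 activations are 4096-dimensional).

```python
import random

def binarize(vec):
    """Binarize real-valued fc6 activations by thresholding at zero
    (the exact threshold is an assumption, not stated in the abstract)."""
    return [1 if x > 0.0 else 0 for x in vec]

def hamming(a, b):
    """Hamming distance between two binary feature vectors."""
    return sum(x != y for x, y in zip(a, b))

def knn_scan(query, corpus, k):
    """Brute-force k-NN under Hamming distance: a sequential scan of the
    whole feature set, as used to build the exact ground-truth results."""
    order = sorted(range(len(corpus)), key=lambda i: hamming(query, corpus[i]))
    return order[:k]

# Toy corpus standing in for the binarized HybridNet fc6 features.
rng = random.Random(0)
corpus = [binarize([rng.gauss(0, 1) for _ in range(64)]) for _ in range(500)]
query = corpus[42]  # an "internal" query: its nearest neighbour is itself
top = knn_scan(query, corpus, k=5)
assert top[0] == 42 and hamming(query, corpus[top[0]]) == 0
```

Approximate indexes such as the Metric Inverted File can then be scored by how well their result lists agree with this exact scan.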
Citations: 15
Developing Benchmarks: The Importance of the Process and New Paradigms
Pub Date : 2016-10-16 DOI: 10.1145/2983554.2983562
R. Ordelman
The value and importance of benchmark evaluations is widely acknowledged, and benchmarks play a key role in many research projects. Establishing a sound evaluation framework takes time, a well-balanced team of domain specialists (preferably with links to the user community and industry), and strong involvement of the research community itself. Such a framework includes (annotated) data sets, well-defined tasks that reflect needs in the 'real world', a proper evaluation methodology, ground truth (including a strategy for repeated assessments), and, last but not least, funding. Although the benefits of an evaluation framework are typically reviewed from the perspective of 'research output' (e.g., a scientific publication demonstrating an advance of a certain methodology), it is important to be aware of the value of the process of creating a benchmark itself: it significantly increases our understanding of the problem we want to address and, as a consequence, the impact of the evaluation outcomes. In this talk I will review the history of a series of tasks focusing on audiovisual search, emphasizing their 'multimodal' aspects, starting in 2006 with the workshop on 'Searching Spontaneous Conversational Speech' that led to tasks in CLEF and MediaEval ("Search and Hyperlinking") and, more recently, TRECVid ("Video Hyperlinking"). The focus of my talk will be on the process rather than on the results of these evaluations themselves, and will address cross-benchmark connections and new benchmark paradigms, specifically the integration of benchmarking into the industrial 'living labs' that are becoming popular in some domains.
Citations: 0
In-depth Exploration of Geotagging Performance using Sampling Strategies on YFCC100M
Pub Date : 2016-10-16 DOI: 10.1145/2983554.2983558
Giorgos Kordopatis-Zilos, S. Papadopoulos, Y. Kompatsiaris
Evaluating multimedia analysis and retrieval systems is a highly challenging task whose outcomes can be highly volatile depending on the selected test collection. In this paper, we focus on the problem of multimedia geotagging, i.e., estimating the geographical location of a media item based on its content and metadata, in order to show that very different evaluation outcomes may be obtained depending on the test collection at hand. To alleviate this problem, we propose an evaluation methodology based on an array of sampling strategies over a reference test collection, together with a way of quantifying and summarizing the volatility of performance measurements. We report experimental results on the MediaEval 2015 Placing Task dataset and demonstrate that the proposed methodology can help capture the performance of geotagging systems in a comprehensive manner that complements existing evaluation approaches.
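The idea of measuring volatility through repeated sampling can be illustrated with a minimal sketch. The subsample fraction, the use of the median error, and the standard deviation as a volatility summary are assumptions here, not the paper's exact sampling strategies.

```python
import random
import statistics

def sample_medians(errors_km, n_samples=200, frac=0.5, seed=0):
    """Median geolocation error over many random subsamples of the test set:
    a minimal stand-in for an array of sampling strategies."""
    rng = random.Random(seed)
    k = max(1, int(len(errors_km) * frac))
    return [statistics.median(rng.sample(errors_km, k)) for _ in range(n_samples)]

def volatility(medians):
    """Summarize performance volatility as the spread of the sampled medians."""
    return statistics.pstdev(medians)

# Hypothetical per-photo location errors (km) from a geotagging system.
errors = [0.5, 1.2, 3.0, 10.0, 45.0, 120.0, 800.0, 2500.0] * 50
meds = sample_medians(errors)
print(f"median-of-medians = {statistics.median(meds):.1f} km, "
      f"volatility = {volatility(meds):.1f} km")
```

A system whose median error swings widely across subsamples would score a high volatility, flagging that a single-collection evaluation is unreliable for it.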
Citations: 7
Concept-Level Multimodal Ranking of Flickr Photo Tags via Recall Based Weighting
Pub Date : 2016-10-16 DOI: 10.1145/2983554.2983555
R. Shah, Yi Yu, Suhua Tang, S. Satoh, Akshay Verma, Roger Zimmermann
Social media platforms allow users to annotate photos with tags, which significantly facilitates effective semantic understanding, search, and retrieval of photos. However, due to the manual, ambiguous, and personalized nature of user tagging, many tags of a photo are in a random order and even irrelevant to the visual content. Aiming to automatically compute tag relevance for a given photo, we propose a tag ranking scheme based on voting from photo neighbors derived from multimodal information. Specifically, we determine photo neighbors leveraging geo, visual, and semantic concepts derived from spatial information, visual content, and textual metadata, respectively. We leverage high-level features instead of traditional low-level features to compute tag relevance. Experimental results on a representative set of 203,840 photos from the YFCC100M dataset confirm that the above-mentioned multimodal concepts complement each other in computing tag relevance. Moreover, we explore the fusion of multimodal information to refine the tag ranking using recall-based weighting. Experimental results on the representative set confirm that the proposed algorithm outperforms the state of the art.
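A neighbour-voting tag ranker can be sketched in a few lines. This toy version takes the neighbour tag lists as given (the paper derives neighbours from geo, visual, and semantic concepts) and simply counts one vote per neighbour per tag; the recall-based weighting and multimodal fusion are omitted.

```python
from collections import Counter

def rank_tags(photo_tags, neighbour_tag_lists):
    """Re-rank a photo's own tags by votes from its neighbours' tags:
    a tag shared by many neighbours is presumed more relevant to the
    photo's visual content than one that appears nowhere else."""
    votes = Counter()
    for tags in neighbour_tag_lists:
        votes.update(set(tags))  # each neighbour votes at most once per tag
    return sorted(photo_tags, key=lambda t: votes[t], reverse=True)

# Hypothetical neighbour photos and their tag lists.
neighbours = [["beach", "sea", "sunset"], ["beach", "sea"], ["beach", "friends"]]
ranked = rank_tags(["friends", "beach", "sea"], neighbours)
assert ranked == ["beach", "sea", "friends"]  # beach: 3 votes, sea: 2, friends: 1
```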
Citations: 16
Which Languages do People Speak on Flickr?: A Language and Geo-Location Study of the YFCC100m Dataset
Pub Date : 2016-10-16 DOI: 10.1145/2983554.2983560
Alireza Koochali, Sebastian Kalkowski, A. Dengel, Damian Borth, Christian Schulze
Recently, the Yahoo Flickr Creative Commons 100 Million (YFCC100m) dataset was introduced to the computer vision and multimedia research community. This dataset consists of millions of images and videos spread across the globe. This geo-distribution hints at a potentially large set of different languages being used in the titles, descriptions, and tags of these images and videos. Since the YFCC100m metadata does not provide any information about the languages used in the dataset, this paper presents the first analysis of this kind. The language and geo-location characteristics of the YFCC100m dataset are described by providing (a) an overview of the languages used, (b) language-to-country associations, and (c) second-language usage in the dataset. Knowing the language used in titles, descriptions, and tags, users of the dataset can make language-specific decisions when selecting subsets of images, e.g., for proper training of classifiers, or analyze user behavior specific to a spoken language. This language information is also essential for further linguistic studies of the YFCC100m metadata.
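The abstract does not state which language identifier the authors used, but a first coarse cut over metadata strings can be obtained from Unicode script membership alone, as in this hedged, stdlib-only sketch (a real language detector would be needed to separate languages that share a script, e.g. English from Spanish):

```python
import unicodedata
from collections import Counter

def script_hint(text):
    """Return the dominant Unicode script of a metadata string, inferred
    from the first word of each character's Unicode name, e.g.
    'LATIN SMALL LETTER A' -> 'LATIN', 'CJK UNIFIED IDEOGRAPH-...' -> 'CJK'.
    A crude first pass, not the paper's method."""
    counts = Counter()
    for ch in text:
        if ch.isalpha():
            counts[unicodedata.name(ch).split()[0]] += 1
    return counts.most_common(1)[0][0] if counts else None

assert script_hint("Sunset over the bay") == "LATIN"
assert script_hint("写真")  == "CJK"     # a Japanese/Chinese tag
assert script_hint("12345") is None       # no alphabetic characters at all
```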
Citations: 10
Analysis of Spatial, Temporal, and Content Characteristics of Videos in the YFCC100M Dataset
Pub Date : 2016-10-16 DOI: 10.1145/2983554.2983559
Jun-Ho Choi, Jong-Seok Lee
The Yahoo Flickr Creative Commons 100 Million dataset (YFCC100M) is one of the largest public databases of images and videos, with annotations, for research on multimedia analysis. In this paper, we present our analysis of the characteristics of the 0.8 million videos in the dataset from spatial, temporal, and content perspectives. For this, all video frames and the metadata of the videos are examined. In addition, a user-wise analysis of the characteristics is conducted. We make the obtained results publicly available to the research community in the form of a metadata dataset.
Citations: 2
Proceedings of the 2016 ACM Workshop on Multimedia COMMONS
Pub Date : 1900-01-01 DOI: 10.1145/2983554
Citations: 1