
Proceedings of the 21st ACM international conference on Multimedia: Latest Publications

Classifying tag relevance with relevant positive and negative examples
Pub Date : 2013-10-21 DOI: 10.1145/2502081.2502129
Xirong Li, Cees G. M. Snoek
Image tag relevance estimation aims to automatically determine whether what people label about images is factually present in the pictorial content. Different from previous works, which either use only positive examples of a given tag or use positive and random negative examples, we argue the importance of relevant positive and relevant negative examples for tag relevance estimation. We propose a system that selects positive and negative examples, deemed most relevant with respect to the given tag, from crowd-annotated images. While applying models for many tags could be cumbersome, our system trains efficient ensembles of Support Vector Machines per tag, enabling fast classification. Experiments on two benchmark sets show that the proposed system compares favorably against five present-day methods. Given extracted visual features, for each image our system can process up to 3,787 tags per second. The new system is both effective and efficient for tag relevance estimation.
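Below is a minimal sketch of the per-tag idea described above, assuming precomputed visual features and a boolean mask of crowd-annotated positives. The nearest-to-centroid negative selection is one plausible heuristic for "relevant" negatives, not necessarily the authors' criterion, and the scikit-learn bagging ensemble stands in for their SVM ensembles.

```python
import numpy as np
from sklearn.ensemble import BaggingClassifier
from sklearn.svm import LinearSVC

def train_tag_classifier(feats, tag_pos_mask, n_models=10):
    """Train an ensemble of linear SVMs for one tag.

    feats: (n, d) visual features; tag_pos_mask: boolean mask marking
    images crowd-annotated with the tag (treated as relevant positives).
    """
    pos = feats[tag_pos_mask]
    neg_pool = feats[~tag_pos_mask]
    # Heuristic "relevant" negatives: the unlabeled images closest to the
    # positive centroid, i.e. the hardest ones to separate.
    dists = np.linalg.norm(neg_pool - pos.mean(axis=0), axis=1)
    neg = neg_pool[np.argsort(dists)[: len(pos)]]
    X = np.vstack([pos, neg])
    y = np.concatenate([np.ones(len(pos)), np.zeros(len(neg))])
    return BaggingClassifier(LinearSVC(), n_estimators=n_models).fit(X, y)
```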
Citations: 40
Lecture video segmentation by automatically analyzing the synchronized slides
Pub Date : 2013-10-21 DOI: 10.1145/2502081.2508115
Xiaoyin Che, Haojin Yang, C. Meinel
In this paper we propose a solution which segments lecture video by analyzing its supplementary synchronized slides. The slide content is derived automatically via OCR (Optical Character Recognition), with an accuracy of approximately 90%. We then partition the slides into different subtopics by examining their logical relevance. Since the slides are synchronized with the video stream, the subtopics of the slides indicate exactly the segments of the video. Our evaluation reveals that the average segment length per lecture ranges from 5 to 15 minutes, and 45% of the segments obtained from the test datasets are logically reasonable.
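As a rough illustration of how synchronized slides can induce video segments, here is a sketch that cuts the video wherever consecutive OCR'd slides share little vocabulary. The Jaccard threshold is a crude stand-in for the paper's logical-relevance analysis, and `slide_texts` and `slide_times` are assumed inputs.

```python
import re

def segment_by_slides(slide_texts, slide_times, sim_threshold=0.25):
    """Group synchronized slides into subtopics and derive video segments.

    slide_texts: OCR output per slide; slide_times: start time (seconds)
    of each slide in the video. A new segment starts when word overlap
    with the previous slide drops below sim_threshold.
    """
    def words(t):
        return set(re.findall(r"[a-z]+", t.lower()))

    segments = [(slide_times[0], None)]
    for prev, cur, t in zip(slide_texts, slide_texts[1:], slide_times[1:]):
        a, b = words(prev), words(cur)
        overlap = len(a & b) / max(1, len(a | b))   # Jaccard similarity
        if overlap < sim_threshold:                 # topic change: cut here
            start, _ = segments[-1]
            segments[-1] = (start, t)
            segments.append((t, None))
    return segments   # list of (start, end); the last end remains open
```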
Citations: 42
Using quadratic programming to estimate feature relevance in structural analyses of music
Pub Date : 2013-10-21 DOI: 10.1145/2502081.2502124
Jordan B. L. Smith, E. Chew
To identify repeated patterns and contrasting sections in music, it is common to use self-similarity matrices (SSMs) to visualize and estimate structure. We introduce a novel application for SSMs derived from audio recordings: using them to learn about the potential reasoning behind a listener's annotation. We use SSMs generated by musically-motivated audio features at various timescales to represent contributions to a structural annotation. Since a listener's attention can shift among musical features (e.g., rhythm, timbre, and harmony) throughout a piece, we further break down the SSMs into section-wise components and use quadratic programming (QP) to minimize the distance between a linear sum of these components and the annotated description. We posit that the optimal section-wise weights on the feature components may indicate the features to which a listener attended when annotating a piece, and thus may help us to understand why two listeners disagreed about a piece's structure. We discuss some examples that substantiate the claim that feature relevance varies throughout a piece, using our method to investigate differences between listeners' interpretations, and lastly propose some variations on our method.
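The core optimization can be written as a nonnegative least-squares problem, a special case of the quadratic program described above: find weights w minimizing ||sum_i w_i S_i - A||_F with w >= 0, where the S_i are the feature-wise SSM components for a section and A is the annotation-derived SSM. A sketch using SciPy, with inputs assumed precomputed:

```python
import numpy as np
from scipy.optimize import nnls

def fit_feature_weights(component_ssms, annotation_ssm):
    """Solve min_w ||sum_i w_i * S_i - A||_F subject to w >= 0.

    component_ssms: list of k feature-wise (n, n) SSM components for one
    section; annotation_ssm: the (n, n) SSM implied by the annotation.
    """
    # Flatten each SSM into a column; the weighted sum becomes M @ w.
    M = np.column_stack([S.ravel() for S in component_ssms])  # (n*n, k)
    w, residual = nnls(M, annotation_ssm.ravel())
    return w, residual
```

The recovered per-section weights w can then be compared across annotators to see which features each listener's description tracks most closely.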
Citations: 10
GLocal structural feature selection with sparsity for multimedia data understanding
Pub Date : 2013-10-21 DOI: 10.1145/2502081.2502142
Yan Yan, Zhongwen Xu, Gaowen Liu, Zhigang Ma, N. Sebe
The selection of discriminative features is an important and effective technique for many multimedia tasks. Using irrelevant features in classification or clustering tasks could deteriorate the performance. Thus, designing efficient feature selection algorithms to remove the irrelevant features is a possible way to improve the classification or clustering performance. With the successful use of sparse models in image and video classification and understanding, imposing structural sparsity in feature selection has been widely investigated during the past years. Motivated by the merit of sparse models, we propose a novel feature selection method using a sparse model in this paper. Different from the state of the art, our method is built upon the $\ell_{2,p}$-norm and simultaneously considers both the global and local (GLocal) structures of data distribution. Our method is more flexible in selecting the discriminating features as it is able to control the degree of sparseness. Moreover, considering both global and local structures of data distribution makes our feature selection process more effective. An efficient algorithm is proposed to solve the $\ell_{2,p}$-norm sparsity optimization problem in this paper. Experimental results performed on real-world image and video datasets show the effectiveness of our feature selection method compared to several state-of-the-art methods.
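As a concrete special case (p = 1, global structure only, so not the paper's full GLocal method), the $\ell_{2,1}$-regularized problem min_W ||XW - Y||_F^2 + gamma * ||W||_{2,1} can be solved by the standard iteratively reweighted scheme sketched below; rows of W with large norms identify the selected features.

```python
import numpy as np

def l21_feature_select(X, Y, gamma=1.0, n_iter=50, eps=1e-8):
    """Iteratively reweighted solver for min_W ||XW - Y||_F^2 + gamma*||W||_{2,1}.

    X: (n_samples, n_features); Y: (n_samples, n_classes) label matrix.
    Returns W; rows of W with large l2 norm mark informative features.
    """
    d = X.shape[1]
    D = np.eye(d)
    for _ in range(n_iter):
        # Closed-form update: W = (X^T X + gamma * D)^{-1} X^T Y
        W = np.linalg.solve(X.T @ X + gamma * D, X.T @ Y)
        # Reweight: D_ii = 1 / (2 * ||w_i||_2), smoothed by eps
        row_norms = np.sqrt((W ** 2).sum(axis=1)) + eps
        D = np.diag(1.0 / (2.0 * row_norms))
    return W

# Rank features by row norm and keep the top k:
# scores = np.linalg.norm(W, axis=1); top_k = np.argsort(scores)[::-1][:k]
```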
Citations: 16
Multimedia information retrieval: music and audio
Pub Date : 2013-10-21 DOI: 10.1145/2502081.2502237
M. Schedl, E. Gómez, Masataka Goto
Music is an omnipresent topic in our daily lives, as almost everyone enjoys listening to his or her favorite tunes. Music information retrieval (MIR) is a research field that aims – among other things – at automatically extracting semantically meaningful information from various representations of music entities, such as a digital audio file, a band’s web page, a song’s lyrics, or a tweet about a microblogger’s current listening activity. A key approach in MIR is to describe music via computational features, which can be categorized into: music content, music context, and user context. The music content refers to features extracted from the audio signal, while information about musical entities not encoded in the signal (e.g., image of an artist or political background of a song) are referred to as music context. The user context, in contrast, includes environmental aspects as well as physical and mental activities of the music listener. MIR research has been seeing a paradigm shift over the last couple of years, as an increasing number of recent approaches and commercial technologies combine content-based techniques (focusing on the audio signal) with multimedia context data mined, e.g. from web sources and with user context information.
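For instance, "music content" descriptors in the sense above can be computed directly from the audio signal; a minimal sketch with librosa (the file path is a placeholder):

```python
import librosa

# Content features: descriptors computed from the signal itself,
# here a timbre summary (MFCCs) and a rhythm estimate (tempo).
y, sr = librosa.load("song.wav")
mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13)   # timbre summary
tempo, _ = librosa.beat.beat_track(y=y, sr=sr)       # rhythm estimate
print(mfcc.mean(axis=1), tempo)
```

Music-context and user-context features, by contrast, would be mined from sources outside the signal (web pages, lyrics, tweets, listening logs).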
Citations: 6
Session details: Security and forensics
Pub Date : 2013-10-21 DOI: 10.1145/3245296
R. Cucchiara
{"title":"Session details: Security and forensics","authors":"R. Cucchiara","doi":"10.1145/3245296","DOIUrl":"https://doi.org/10.1145/3245296","url":null,"abstract":"","PeriodicalId":20448,"journal":{"name":"Proceedings of the 21st ACM international conference on Multimedia","volume":"65 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2013-10-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"84427720","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Session details: Annotation
Pub Date : 2013-10-21 DOI: 10.1145/3245300
Pablo Caesar
{"title":"Session details: Annotation","authors":"Pablo Caesar","doi":"10.1145/3245300","DOIUrl":"https://doi.org/10.1145/3245300","url":null,"abstract":"","PeriodicalId":20448,"journal":{"name":"Proceedings of the 21st ACM international conference on Multimedia","volume":"182 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2013-10-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"83023226","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Session details: Scene understanding
Pub Date : 2013-10-21 DOI: 10.1145/3245301
D. Joshi
{"title":"Session details: Scene understanding","authors":"D. Joshi","doi":"10.1145/3245301","DOIUrl":"https://doi.org/10.1145/3245301","url":null,"abstract":"","PeriodicalId":20448,"journal":{"name":"Proceedings of the 21st ACM international conference on Multimedia","volume":"C-24 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2013-10-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"84427045","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
SentiBank: large-scale ontology and classifiers for detecting sentiment and emotions in visual content
Pub Date : 2013-10-21 DOI: 10.1145/2502081.2502268
Damian Borth, Tao Chen, R. Ji, Shih-Fu Chang
A picture is worth one thousand words, but what words should be used to describe the sentiment and emotions conveyed in the increasingly popular social multimedia? We demonstrate a novel system which combines sound structures from psychology and the folksonomy extracted from social multimedia to develop a large visual sentiment ontology consisting of 1,200 concepts and associated classifiers called SentiBank. Each concept, defined as an Adjective Noun Pair (ANP), is made of an adjective strongly indicating emotions and a noun corresponding to objects or scenes that have a reasonable prospect of automatic detection. We believe such large-scale visual classifiers offer a powerful mid-level semantic representation enabling high-level sentiment analysis of social multimedia. We demonstrate novel applications made possible by SentiBank including live sentiment prediction of social media and visualization of visual content in a rich intuitive semantic space.
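A sketch of how such an ANP bank might be applied at inference time, with all arrays hypothetical (SentiBank's actual detectors and pooling scheme are not specified here): score the image against each ANP detector, keep the strongest responses, and pool their sentiment values.

```python
import numpy as np

def anp_sentiment_score(feat, anp_weights, anp_bias, anp_sentiment, top_k=20):
    """Score an image against a bank of ANP detectors and pool sentiment.

    feat: (d,) visual feature of one image; anp_weights: (1200, d) linear
    detectors, one per ANP concept (e.g., "beautiful sky"); anp_bias:
    (1200,); anp_sentiment: (1200,) polarity of each ANP in [-1, 1].
    Returns the indices of the detected top-k ANPs and a pooled sentiment.
    """
    scores = anp_weights @ feat + anp_bias          # per-ANP responses
    top = np.argsort(scores)[::-1][:top_k]          # most confident concepts
    # Confidence-weighted average of the detected ANPs' sentiment values
    conf = np.maximum(scores[top], 0)
    sentiment = float(conf @ anp_sentiment[top] / (conf.sum() + 1e-8))
    return top, sentiment
```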
Citations: 190
Session details: Best paper session
Pub Date : 2013-10-21 DOI: 10.1145/3245285
R. Zimmerman
{"title":"Session details: Best paper session","authors":"R. Zimmerman","doi":"10.1145/3245285","DOIUrl":"https://doi.org/10.1145/3245285","url":null,"abstract":"","PeriodicalId":20448,"journal":{"name":"Proceedings of the 21st ACM international conference on Multimedia","volume":"39 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2013-10-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"78615638","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0