Commonsense Knowledge for the Collection of Ground Truth Data on Semantic Descriptors

V. Lombardo, R. Damiano
{"title":"基于语义描述符的地面真值数据收集的常识知识","authors":"V. Lombardo, R. Damiano","doi":"10.1109/ISM.2012.23","DOIUrl":null,"url":null,"abstract":"The coverage of the semantic gap in video indexing and retrieval has gone through a continuous increase of the vocabulary of high - level features or semantic descriptors, sometimes organized in light - scale, corpus - specific, computational ontologies. This paper presents a computer - supported manual annotation method that relies on a very large scale, shared, commonsense ontologies for the selection of semantic descriptors. The ontological terms are accessed through a linguistic interface that relies on multi - lingual dictionaries and action/event template structures (or frames). The manual generation or check of annotations provides ground truth data for evaluation purposes and training data for knowledge acquisition. The novelty of the approach relies on the use of widely shared large - scale ontologies, that prevent arbitrariness of annotation and favor interoperability. We test the viability of the approach by carrying out some user studies on the annotation of narrative videos.","PeriodicalId":282528,"journal":{"name":"2012 IEEE International Symposium on Multimedia","volume":"9 Suppl 2 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2012-12-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"10","resultStr":"{\"title\":\"Commonsense Knowledge for the Collection of Ground Truth Data on Semantic Descriptors\",\"authors\":\"V. Lombardo, R. Damiano\",\"doi\":\"10.1109/ISM.2012.23\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"The coverage of the semantic gap in video indexing and retrieval has gone through a continuous increase of the vocabulary of high - level features or semantic descriptors, sometimes organized in light - scale, corpus - specific, computational ontologies. This paper presents a computer - supported manual annotation method that relies on a very large scale, shared, commonsense ontologies for the selection of semantic descriptors. The ontological terms are accessed through a linguistic interface that relies on multi - lingual dictionaries and action/event template structures (or frames). The manual generation or check of annotations provides ground truth data for evaluation purposes and training data for knowledge acquisition. The novelty of the approach relies on the use of widely shared large - scale ontologies, that prevent arbitrariness of annotation and favor interoperability. 
We test the viability of the approach by carrying out some user studies on the annotation of narrative videos.\",\"PeriodicalId\":282528,\"journal\":{\"name\":\"2012 IEEE International Symposium on Multimedia\",\"volume\":\"9 Suppl 2 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2012-12-10\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"10\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"2012 IEEE International Symposium on Multimedia\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/ISM.2012.23\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"2012 IEEE International Symposium on Multimedia","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/ISM.2012.23","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 10

Abstract

The coverage of the semantic gap in video indexing and retrieval has been pursued through a continuous increase in the vocabulary of high-level features, or semantic descriptors, sometimes organized into light-scale, corpus-specific computational ontologies. This paper presents a computer-supported manual annotation method that relies on very large-scale, shared, commonsense ontologies for the selection of semantic descriptors. The ontological terms are accessed through a linguistic interface based on multi-lingual dictionaries and action/event template structures (or frames). The manual generation or checking of annotations provides ground truth data for evaluation purposes and training data for knowledge acquisition. The novelty of the approach lies in the use of widely shared, large-scale ontologies, which prevents arbitrariness in annotation and favors interoperability. We test the viability of the approach by carrying out user studies on the annotation of narrative videos.
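To make the idea concrete, the following is a minimal sketch, not taken from the paper, of how an annotation grounded in shared ontology terms via an action/event frame might be represented. All class names, role labels, and IRIs below are hypothetical illustrations rather than the authors' actual data model.

```python
# Minimal sketch (not from the paper): one possible representation of a manual
# annotation that grounds a video segment in shared ontology terms through an
# action/event frame. Names and IRIs are illustrative assumptions only.
from dataclasses import dataclass, field
from typing import Dict, List


@dataclass
class FrameAnnotation:
    """A single ground-truth annotation for one video segment."""
    segment_start: float                                    # seconds from video start
    segment_end: float
    frame: str                                              # action/event template, e.g. "Giving"
    roles: Dict[str, str] = field(default_factory=dict)     # frame role -> ontology term IRI
    lexical_cues: List[str] = field(default_factory=list)   # words the annotator entered


# Example: an annotator describes "a man hands a letter to a woman"; the
# linguistic interface maps the words to ontology terms (IRIs are made up).
annotation = FrameAnnotation(
    segment_start=12.0,
    segment_end=17.5,
    frame="Giving",
    roles={
        "Donor": "http://example.org/ontology#Man",
        "Theme": "http://example.org/ontology#Letter",
        "Recipient": "http://example.org/ontology#Woman",
    },
    lexical_cues=["hand", "letter"],
)
print(annotation.frame, list(annotation.roles))
```

Because every role is filled with a term from a shared ontology rather than a free-text label, annotations produced by different users remain comparable and can serve both as evaluation ground truth and as training data.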