VERL: An Ontology Framework for Representing and Annotating Video Events

IEEE Multimedia. Publication date: 2005-10-01. DOI: 10.1109/MMUL.2005.87
A. François, R. Nevatia, Jerry R. Hobbs, R. Bolles
{"title":"VERL: An Ontology Framework for Representing and Annotating Video Events","authors":"A. François, R. Nevatia, Jerry R. Hobbs, R. Bolles","doi":"10.1109/MMUL.2005.87","DOIUrl":null,"url":null,"abstract":"The notion of \"events\" is extremely important in characterizing the contents of video. An event is typically triggered by some kind of change of state captured in the video, such as when an object starts moving. The ability to reason with events is a critical step toward video understanding. This article describes the findings of a recent workshop series that has produced an ontology framework for representing video events-called Video Event Representation Language (VERL) -and a companion annotation framework, called Video Event Markup Language (VEML). One of the key concepts in this work is the modeling of events as composable, whereby complex events are constructed from simpler events by operations such as sequencing, iteration, and alternation. The article presents an extensible event and object ontology expressed in VERL and discusses a detailed example of applying VERL and VEML to the description of a \"tailgating\" event in surveillance video.","PeriodicalId":290893,"journal":{"name":"IEEE Multim.","volume":"103 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2005-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"235","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"IEEE Multim.","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/MMUL.2005.87","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 235

Abstract

The notion of "events" is extremely important in characterizing the contents of video. An event is typically triggered by some kind of change of state captured in the video, such as when an object starts moving. The ability to reason with events is a critical step toward video understanding. This article describes the findings of a recent workshop series that has produced an ontology framework for representing video events, called the Video Event Representation Language (VERL), and a companion annotation framework, called the Video Event Markup Language (VEML). One of the key concepts in this work is the modeling of events as composable, whereby complex events are constructed from simpler events by operations such as sequencing, iteration, and alternation. The article presents an extensible event and object ontology expressed in VERL and discusses a detailed example of applying VERL and VEML to the description of a "tailgating" event in surveillance video.
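To make the composability idea concrete, the following is a minimal illustrative sketch in Python. It is not VERL's actual syntax or API; the class names (PrimitiveEvent, Sequence, Iterate, Alternate) and the particular decomposition of "tailgating" are hypothetical, chosen only to show how complex events can be built from simpler ones by sequencing, iteration, and alternation.

    # Hypothetical sketch of composable video events (not VERL syntax).
    from dataclasses import dataclass
    from typing import List, Tuple

    # An observed interval on the video timeline, in seconds.
    Interval = Tuple[float, float]

    @dataclass
    class PrimitiveEvent:
        """A simple event triggered by a change of state, e.g. 'object starts moving'."""
        name: str
        interval: Interval

    @dataclass
    class Sequence:
        """A complex event whose sub-events occur one after another."""
        parts: List["Event"]

    @dataclass
    class Iterate:
        """A complex event formed by repeating a sub-event a number of times."""
        body: "Event"
        times: int

    @dataclass
    class Alternate:
        """A complex event realized by any one of several alternative sub-events."""
        options: List["Event"]

    Event = PrimitiveEvent | Sequence | Iterate | Alternate

    # One plausible (assumed, not the paper's) decomposition of a "tailgating"
    # event: a vehicle passes a checkpoint, then another agent follows through
    # closely before the barrier closes.
    tailgating = Sequence([
        PrimitiveEvent("vehicle_A_passes_checkpoint", (12.0, 14.5)),
        Alternate([
            PrimitiveEvent("vehicle_B_follows_closely", (14.5, 16.0)),
            PrimitiveEvent("pedestrian_follows_closely", (14.5, 16.0)),
        ]),
    ])

In an annotation setting, a structure like the one above would be serialized into a markup document (the role VEML plays in the paper), while the ontology (VERL) defines which event types and composition operators are available.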