Ontology-coupled active contours for dynamic video scene understanding

J. Olszewska, T. McCluskey
{"title":"Ontology-coupled active contours for dynamic video scene understanding","authors":"J. Olszewska, T. McCluskey","doi":"10.1109/INES.2011.5954775","DOIUrl":null,"url":null,"abstract":"In this paper, we present an innovative approach coupling active contours with an ontological representation of knowledge, in order to understand scenes acquired by a moving camera and containing multiple non-rigid objects evolving over space and time. The developed active contours enable both segmentation and tracking of multiple targets in each captured scene over a video sequence with unknown camera calibration. Hence, this active contour technique provides information on the objects of interest as well as on parts of them (e.g. shape and position), and contains simultaneously low-level characteristics such as intensity or color features. The ontology we propose consists of concepts whose hierarchical levels map the granularity of the studied scene and of a set of inter- and intra-object spatial and temporal relations defined for this framework, object and sub-object characteristics e.g. shape, and visual concepts like color. The system obtained by coupling this ontology with active contours can study dynamic scenes at different levels of granularity, both numerically and semantically characterize each scene and its components i.e. objects of interest, and reason about spatiotemporal relations between them or parts of them. This resulting knowledge-based vision system was demonstrated on real-world video sequences containing multiple mobile highly-deformable objects.","PeriodicalId":414812,"journal":{"name":"2011 15th IEEE International Conference on Intelligent Engineering Systems","volume":"287 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2011-06-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"32","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2011 15th IEEE International Conference on Intelligent Engineering Systems","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/INES.2011.5954775","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 32

Abstract

In this paper, we present an innovative approach coupling active contours with an ontological representation of knowledge, in order to understand scenes acquired by a moving camera and containing multiple non-rigid objects evolving over space and time. The developed active contours enable both segmentation and tracking of multiple targets in each captured scene over a video sequence with unknown camera calibration. Hence, this active contour technique provides information on the objects of interest as well as on parts of them (e.g. shape and position), while simultaneously capturing low-level characteristics such as intensity or color features. The proposed ontology consists of concepts whose hierarchical levels map the granularity of the studied scene, together with a set of inter- and intra-object spatial and temporal relations defined for this framework, object and sub-object characteristics such as shape, and visual concepts such as color. The system obtained by coupling this ontology with active contours can study dynamic scenes at different levels of granularity, characterize each scene and its components (i.e. objects of interest) both numerically and semantically, and reason about spatiotemporal relations between them or parts of them. The resulting knowledge-based vision system was demonstrated on real-world video sequences containing multiple mobile, highly deformable objects.
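To illustrate how such a coupling between contour-level measurements and ontology-level relations might look in practice, the minimal Python sketch below (not from the paper; all class and function names such as SceneObject, overlaps and left_of are hypothetical) takes the closed contour that an active-contour tracker would supply for each object in a frame, derives simple shape and position attributes (bounding box, centroid), and asserts two inter-object spatial relations between scene objects.

```python
from dataclasses import dataclass, field
from typing import Dict, List, Tuple

Point = Tuple[float, float]


@dataclass
class SceneObject:
    """Ontology individual for one tracked object in one frame (hypothetical schema)."""
    name: str
    concept: str                     # e.g. "Person" or "Animal", a node in the concept hierarchy
    contour: List[Point]             # closed contour assumed to come from the active-contour tracker
    relations: Dict[str, List[str]] = field(default_factory=dict)

    @property
    def bbox(self) -> Tuple[float, float, float, float]:
        """Axis-aligned bounding box (xmin, ymin, xmax, ymax) derived from the contour."""
        xs = [p[0] for p in self.contour]
        ys = [p[1] for p in self.contour]
        return min(xs), min(ys), max(xs), max(ys)

    @property
    def centroid(self) -> Point:
        """Mean of the contour points, used here as the object's position attribute."""
        xs = [p[0] for p in self.contour]
        ys = [p[1] for p in self.contour]
        return sum(xs) / len(xs), sum(ys) / len(ys)


def overlaps(a: SceneObject, b: SceneObject) -> bool:
    """True if the bounding boxes of the two contours intersect."""
    ax0, ay0, ax1, ay1 = a.bbox
    bx0, by0, bx1, by1 = b.bbox
    return ax0 <= bx1 and bx0 <= ax1 and ay0 <= by1 and by0 <= ay1


def left_of(a: SceneObject, b: SceneObject) -> bool:
    """True if a's centroid lies to the left of b's centroid."""
    return a.centroid[0] < b.centroid[0]


def assert_spatial_relations(objects: List[SceneObject]) -> None:
    """Populate each object's relation dictionary for the current frame."""
    for a in objects:
        for b in objects:
            if a is b:
                continue
            if overlaps(a, b):
                a.relations.setdefault("overlaps", []).append(b.name)
            if left_of(a, b):
                a.relations.setdefault("left_of", []).append(b.name)


if __name__ == "__main__":
    # Toy rectangular contours standing in for the tracker's output on one frame.
    dog = SceneObject("dog1", "Animal", [(10, 10), (40, 10), (40, 30), (10, 30)])
    person = SceneObject("person1", "Person", [(35, 5), (70, 5), (70, 60), (35, 60)])
    assert_spatial_relations([dog, person])
    print(dog.name, dog.relations)        # {'overlaps': ['person1'], 'left_of': ['person1']}
    print(person.name, person.relations)  # {'overlaps': ['dog1']}
```

In the system described by the authors, such relations would be asserted as role instances in the ontology and fed to a reasoner rather than stored in a plain dictionary; the dictionary here only keeps the sketch self-contained and runnable.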