Title: Detecting semantic concepts using context and audiovisual features
Authors: M. Naphade, Thomas S. Huang
DOI: 10.1109/EVENT.2001.938871 (https://doi.org/10.1109/EVENT.2001.938871)
Published in: Proceedings IEEE Workshop on Detection and Recognition of Events in Video
Citations: 21
Abstract
Detection of high-level semantics from audio-visual data is a challenging multimedia understanding problem. The difficulty lies mainly in the gap between low-level media features and high-level semantic concepts. In an attempt to bridge this gap, Naphade et al. (see Proceedings of Workshop on Content Based Access to Image and Video Libraries, p.35-39, 2000, and Proceedings of IEEE International Conference on Image Processing, Chicago, IL, vol.3, p.536-40, 1998) proposed a probabilistic framework for semantic understanding. The components of this framework are probabilistic multimedia objects and a graphical network of such objects. We show how the framework supports detection of multiple high-level concepts that enjoy spatial and temporal support. More importantly, we show why context matters and how it can be modeled. Using a factor graph framework, we model context and use it to improve detection of sites, objects, and events. Using the concepts "outdoor" and "flying-helicopter", we demonstrate how the factor graph multinet models context and uses it for late integration of multimodal features. Using ROC curves and probability-of-error curves, we support the intuition that context should help.
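To make the idea of context-based fusion concrete, the following is a minimal, hypothetical sketch (not the paper's actual model) of how a pairwise factor between two binary concept variables can shift a detector's posterior. The likelihood values and the coupling factor below are illustrative assumptions only; the paper's factor graph multinet is considerably richer.

```python
# Illustrative sketch of context fusion in a tiny factor graph.
# All numbers are assumed for demonstration, not taken from the paper.
# Two binary concepts: A = "outdoor", B = "flying-helicopter".
import itertools

# Detector likelihoods [P(obs | state=0), P(obs | state=1)] -- assumed values.
lik_A = [0.2, 0.8]   # the "outdoor" detector strongly favors presence
lik_B = [0.55, 0.45] # the "flying-helicopter" detector is weakly against presence

# Pairwise context factor psi[a][b]: large when the states agree,
# encoding the (assumed) correlation "a flying helicopter implies outdoor".
psi = [[0.9, 0.1],
       [0.2, 0.8]]

def posterior_B(use_context):
    """Marginal P(B=1 | observations), by brute-force enumeration."""
    scores = {0: 0.0, 1: 0.0}
    for a, b in itertools.product([0, 1], repeat=2):
        w = lik_A[a] * lik_B[b]
        if use_context:
            w *= psi[a][b]      # include the context factor in the joint
        scores[b] += w
    z = scores[0] + scores[1]   # normalize over both states of B
    return scores[1] / z

print(f"P(flying-helicopter) without context: {posterior_B(False):.3f}")
print(f"P(flying-helicopter) with context:    {posterior_B(True):.3f}")
```

With these assumed numbers, the weak helicopter detection is pulled upward once the confident "outdoor" evidence flows through the coupling factor, which is the qualitative behavior the ROC comparison in the paper is meant to quantify.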