Multimodal 3-D tracking and event detection via the particle filter
D. Zotkin, R. Duraiswami, L. Davis
Pub Date: 2001-07-08 | DOI: 10.1109/EVENT.2001.938862
Determining the occurrence of an event is fundamental to developing systems that can observe and react to events. Often, this determination is based on collecting video and/or audio data and determining the state or location of a tracked object. We use Bayesian inference and the particle filter to track moving objects, using both video data obtained from multiple cameras and audio data obtained from arrays of microphones. The algorithms developed are applied to detecting events in two fields of application. In the first, the behavior of a flying echolocating bat as it approaches moving prey is studied, and the events of search, approach, and capture are detected. In the second, we describe detection of turn-taking in a conversation between possibly moving participants recorded with a smart video conferencing setup.
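The abstract gives no implementation detail, but the tracker it names is a standard one. As a rough illustration of the kind of multimodal Bayesian tracking described, the sketch below implements a generic bootstrap particle filter that fuses a Gaussian video position likelihood with a time-delay-of-arrival audio likelihood. The motion model, sensor geometry, and noise levels are all invented for the example and are not the paper's actual parameters.

```python
# Minimal bootstrap particle filter for 3-D tracking, fusing a video
# measurement (noisy 3-D position, e.g. from triangulated cameras) with
# an audio measurement (time difference of arrival between two mics).
# All models and parameters here are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
C = 343.0                              # speed of sound, m/s
MIC_A = np.array([0.0, 0.0, 0.0])      # assumed microphone positions
MIC_B = np.array([1.0, 0.0, 0.0])

def tdoa(pos):
    """Time difference of arrival at the two mics for a source at `pos`."""
    return (np.linalg.norm(pos - MIC_A, axis=-1)
            - np.linalg.norm(pos - MIC_B, axis=-1)) / C

def step(particles, velocities, video_meas, audio_meas,
         dt=0.04, q=0.5, sigma_v=0.05, sigma_a=1e-4):
    """One predict-weight-resample cycle; state is (position, velocity)."""
    n = len(particles)
    # Predict: constant-velocity motion with Gaussian process noise.
    velocities = velocities + rng.normal(0.0, q * dt, velocities.shape)
    particles = particles + velocities * dt
    # Weight: product of independent video and audio likelihoods.
    d_video = np.linalg.norm(particles - video_meas, axis=1)
    log_w = -0.5 * (d_video / sigma_v) ** 2
    log_w += -0.5 * ((tdoa(particles) - audio_meas) / sigma_a) ** 2
    w = np.exp(log_w - log_w.max())
    w /= w.sum()
    # Estimate is the weighted posterior mean, taken before resampling.
    est = (w[:, None] * particles).sum(axis=0)
    # Multinomial resampling concentrates particles on likely states.
    idx = rng.choice(n, size=n, p=w)
    return particles[idx], velocities[idx], est

# Toy run: a source drifting along +y at 0.2 m/s, observed for 25 frames.
n = 2000
particles = rng.normal([0.5, 1.0, 1.0], 0.3, (n, 3))
velocities = np.zeros((n, 3))
for t in range(25):
    truth = np.array([0.5, 1.0 + 0.2 * t * 0.04, 1.0])
    video = truth + rng.normal(0.0, 0.05, 3)
    audio = tdoa(truth) + rng.normal(0.0, 1e-4)
    particles, velocities, est = step(particles, velocities, video, audio)
print("final estimate:", est)
```

Because the two likelihoods multiply in the weighting step, either modality alone can keep the track alive when the other drops out, which is the usual argument for audio-visual fusion in this setting.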
{"title":"Multimodal 3-D tracking and event detection via the particle filter","authors":"D. Zotkin, R. Duraiswami, L. Davis","doi":"10.1109/EVENT.2001.938862","DOIUrl":"https://doi.org/10.1109/EVENT.2001.938862","url":null,"abstract":"Determining the occurrence of an event is fundamental to developing systems that can observe and react to them. Often, this determination is based on collecting video and/or audio data and determining the state or location of a tracked object. We use Bayesian inference and the particle filter for tracking moving objects, using both video data obtained from multiple cameras and audio data obtained using arrays of microphones. The algorithms developed are applied to determining events arising in two fields of application. In the first, the behavior of a flying echo locating bat as it approaches a moving prey is studied, and the events of search, approach and capture are detected. In a second application we describe detection of turn-taking in a conversation between possibly moving participants recorded using a smart video conferencing setup.","PeriodicalId":375539,"journal":{"name":"Proceedings IEEE Workshop on Detection and Recognition of Events in Video","volume":"23 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2001-07-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115700286","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Detecting semantic concepts using context and audiovisual features
M. Naphade, Thomas S. Huang
Pub Date: 2001-07-08 | DOI: 10.1109/EVENT.2001.938871
Detection of high-level semantics from audio-visual data is a challenging multimedia understanding problem. The difficulty lies mainly in the gap between low-level media features and high-level semantic concepts. In an attempt to bridge this gap, Naphade et al. (see Proceedings of the Workshop on Content-Based Access to Image and Video Libraries, p. 35-39, 2000, and Proceedings of the IEEE International Conference on Image Processing, Chicago, IL, vol. 3, p. 536-40, 1998) proposed a probabilistic framework for semantic understanding. The components of this framework are probabilistic multimedia objects and a graphical network of such objects. We show how the framework supports detection of multiple high-level concepts that enjoy spatial and temporal support. More importantly, we show why context matters and how it can be modeled. Using a factor graph framework, we model context and use it to improve detection of sites, objects, and events. Using the concepts 'outdoor' and 'flying-helicopter', we demonstrate how the factor graph multinet models context and uses it for late integration of multimodal features. Using ROC curves and probability-of-error curves, we support the intuition that context should help.
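To make the context argument concrete, the sketch below builds a toy two-concept factor graph in the spirit of the paper's multinet, though it is not the paper's actual model: each binary concept gets a unary factor from a hypothetical detector score, and a single pairwise factor encodes the correlation between 'outdoor' and 'flying-helicopter'. All factor values are invented, and inference is done by brute-force enumeration since the graph is tiny.

```python
# Toy two-node factor graph: binary concepts x1 = outdoor and
# x2 = flying-helicopter, each with a unary factor from an (invented)
# detector confidence, plus one pairwise context factor encoding that
# flying helicopters tend to appear outdoors. Values are illustrative.
import itertools
import numpy as np

def unary(p_detect):
    """Unary factor from a detector's confidence that the concept is present."""
    return np.array([1.0 - p_detect, p_detect])   # index 0: absent, 1: present

# Pairwise context factor psi[x1, x2]: favors (outdoor, helicopter) together,
# penalizes a flying helicopter indoors.
psi = np.array([[1.0, 0.2],
                [1.0, 3.0]])

def marginals(p1, p2, use_context=True):
    """Posterior P(x_i = 1) for both concepts, by exact enumeration."""
    f1, f2 = unary(p1), unary(p2)
    joint = np.zeros((2, 2))
    for x1, x2 in itertools.product((0, 1), repeat=2):
        joint[x1, x2] = f1[x1] * f2[x2] * (psi[x1, x2] if use_context else 1.0)
    joint /= joint.sum()
    return joint[1, :].sum(), joint[:, 1].sum()

# Weak helicopter evidence (0.4) is pulled up by strong outdoor evidence (0.9).
print("no context  :", marginals(0.9, 0.4, use_context=False))
print("with context:", marginals(0.9, 0.4, use_context=True))
```

Without the context factor the helicopter marginal stays at the raw detector score of 0.4; with it, the strong outdoor evidence raises it to roughly 0.64, which mirrors the late-integration effect the abstract claims the ROC curves demonstrate.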
{"title":"Detecting semantic concepts using context and audiovisual features","authors":"M. Naphade, Thomas S. Huang","doi":"10.1109/EVENT.2001.938871","DOIUrl":"https://doi.org/10.1109/EVENT.2001.938871","url":null,"abstract":"Detection of high-level semantics from audio-visual data is a challenging multimedia understanding problem. The difficulty mainly lies in the gap that exists between low level media features and high level semantic concepts. In an attempt to bridge this gap, Naphade et al. (see Proceedings of Workshop on Content Based Access to Image and Video Libraries, p.35-39, 2000 and Proceedings of IEEE International Conference on Image Processing, Chicago, IL, vol.3, p.536-40, 1998) proposed a probabilistic framework for semantic understanding. The components of this framework are probabilistic multimedia objects and a graphical network of such objects. We show how the framework supports detection of multiple high-level concepts, which enjoy spatial and temporal-support. More importantly, we show why context matters and how it can be modeled. Using a factor graph framework, we model context and use it to improve detection of sites, objects and events. Using concepts outdoor and flying-helicopter we demonstrate how the factor graph multinet models context and uses it for late integration of multimodal features. Using ROC curves and probability of error curves we support the intuition that context should help.","PeriodicalId":375539,"journal":{"name":"Proceedings IEEE Workshop on Detection and Recognition of Events in Video","volume":"103 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1900-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123713075","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}