Title: Temporal events in all dimensions and scales
Authors: M. Slaney, D. Ponceleón, James Kaufman
DOI: 10.1109/EVENT.2001.938870 (https://doi.org/10.1109/EVENT.2001.938870)
Published: 2001-07-08, Proceedings IEEE Workshop on Detection and Recognition of Events in Video
Citations: 2
Abstract
This paper describes a new representation for the audio and visual information in a video signal. We reduce the dimensionality of the signals with singular-value decomposition (SVD) or mel-frequency cepstral coefficients (MFCC). We apply these transforms to word data (the word transcript, mapped into a semantic space via latent semantic indexing), image data (color histograms), and audio data (timbre). Using scale-space techniques, we find large jumps in a video's path through these low-dimensional spaces; such jumps are evidence for events. We use these techniques to analyze the temporal properties of the audio and image data in a video. This analysis creates a hierarchical segmentation of the video, or a table of contents, from both the audio and the image data.
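The core idea — project each frame's features into a low-dimensional space with SVD, smooth the resulting path at some scale, and treat large jumps as event boundaries — can be illustrated with a minimal sketch. This is not the authors' code; the data (synthetic "color histograms" with an injected shift), the number of retained dimensions, and the single smoothing scale are all assumptions for illustration.

```python
import numpy as np
from scipy.ndimage import gaussian_filter1d

# Hypothetical data: 200 frames of 64-bin "color histograms",
# with an abrupt feature shift at frame 100 standing in for an event.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0.0, 1.0, (100, 64)),
               rng.normal(3.0, 1.0, (100, 64))])

# SVD dimensionality reduction: keep the top-k singular directions,
# so each frame becomes a point on a path in k-dimensional space.
k = 10
U, s, Vt = np.linalg.svd(X - X.mean(axis=0), full_matrices=False)
path = U[:, :k] * s[:k]                      # shape: (200, k)

# One scale of a scale-space analysis: Gaussian-smooth the path along time.
# (The paper analyzes many scales; a single sigma is used here for brevity.)
smooth = gaussian_filter1d(path, sigma=5.0, axis=0)

# Large jumps in the smoothed path are evidence for events.
jumps = np.linalg.norm(np.diff(smooth, axis=0), axis=1)
event = int(np.argmax(jumps)) + 1            # frame index of the biggest jump
```

With these synthetic data the largest jump lands near the injected boundary at frame 100; a full scale-space treatment would repeat the smoothing over a range of sigmas to obtain the hierarchical, table-of-contents-style segmentation the paper describes.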