Multi-mode Semantic Cues in Soccer Video
Yu Wang, Yu Cao, Miao Wang, Gang Liu
DOI: 10.14257/ASTL.2015.111.30
Venue (as listed): 2014 17th International Conference on Computer and Information Technology (ICCIT), pp. 156-160
Published: 2015-10-24
Citations: 0
Abstract
We propose a new framework based on multimodal semantic cues and a Hidden Conditional Random Field (HCRF) for detecting highlight events in soccer video. Through analysis of the structural semantics of highlight-event videos, we define nine kinds of multimodal semantic cues that accurately describe the semantic information contained in highlight events. After splitting a video clip into its physical shots, we extract the multimodal semantic cues from the key frame of each shot to obtain that shot's feature vector, and compose the feature vectors of all shots in the test clip into an observation sequence. Using this observation sequence as the input to an HCRF model, we can effectively build a highlight-event detection model even with a small number of training samples.
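The data flow the abstract describes (shot segmentation, cue extraction from each shot's key frame, composition of the per-shot feature vectors into an observation sequence for the HCRF) can be sketched roughly as follows. This is only an illustration of the pipeline's structure, not the authors' implementation: the abstract does not enumerate the nine cues, so `extract_cues`, the key-frame choice (middle frame), and the shot boundaries below are all placeholder assumptions.

```python
# Illustrative sketch (not the authors' code): build the observation
# sequence fed to the HCRF. Nine placeholder values stand in for the
# paper's nine multimodal semantic cues.

def extract_cues(key_frame):
    # Placeholder: maps a key frame to a 9-dimensional cue vector.
    # A real system would compute the paper's multimodal cues here.
    return [float(len(str(key_frame)) % (i + 2)) for i in range(9)]

def split_into_shots(video_frames, boundaries):
    # Split a clip into physical shots at the given boundary indices
    # (in practice these would come from a shot-boundary detector).
    shots, start = [], 0
    for b in boundaries:
        shots.append(video_frames[start:b])
        start = b
    shots.append(video_frames[start:])
    return [s for s in shots if s]

def observation_sequence(video_frames, boundaries):
    # One feature vector per shot, extracted from the shot's key frame
    # (here simply the middle frame), kept in temporal order.
    shots = split_into_shots(video_frames, boundaries)
    return [extract_cues(shot[len(shot) // 2]) for shot in shots]

frames = list(range(100))            # stand-in for decoded video frames
seq = observation_sequence(frames, [30, 70])
print(len(seq), len(seq[0]))         # prints "3 9": 3 shots, 9 cues each
```

The resulting list of 9-dimensional vectors, one per shot, is exactly the kind of variable-length observation sequence that sequence models such as an HCRF consume.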