{"title":"Multi-visual information fusion and aggregation for video action classification","authors":"Xuchao Gong, Zongmin Li, Xiangdong Wang","doi":"10.1117/12.2644312","DOIUrl":null,"url":null,"abstract":"In order to fully mine the performance improvement of spatio-temporal features in video action classification, we propose a multi-visual information fusion time sequence prediction network (MI-TPN) which based on the feature aggregation model ActionVLAD. The method includes three parts: multi-visual information fusion, time sequence feature modeling and spatiotemporal feature aggregation. In the multi-visual information fusion, the RGB features and optical flow features are combined, the visual context and action description details are fully considered. In time sequence feature modeling, the temporal relationship is modeled by LSTM to obtain the importance measurement between temporal description features. Finally, in feature aggregation, time step feature and spatiotemporal center attention mechanism are used to aggregate features and projected them into a common feature space. This method obtains good results on three commonly used comparative datasets UCF101, HMDB51 and Something.","PeriodicalId":314555,"journal":{"name":"International Conference on Digital Image Processing","volume":"100 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2022-10-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"International Conference on Digital Image Processing","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1117/12.2644312","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Abstract
To fully exploit spatio-temporal features for video action classification, we propose a multi-visual information fusion time sequence prediction network (MI-TPN) based on the feature aggregation model ActionVLAD. The method consists of three parts: multi-visual information fusion, time sequence feature modeling, and spatiotemporal feature aggregation. In multi-visual information fusion, RGB features and optical flow features are combined so that both the visual context and the details of the action are taken into account. In time sequence feature modeling, an LSTM models the temporal relationships to measure the relative importance of the per-time-step description features. Finally, in feature aggregation, the time-step features and a spatiotemporal center attention mechanism are used to aggregate features and project them into a common feature space. The method achieves good results on three commonly used benchmark datasets: UCF101, HMDB51, and Something-Something.
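The abstract does not give architectural details, but the overall pipeline it describes (two-stream fusion of RGB and optical flow features, LSTM temporal modeling, and ActionVLAD-style aggregation over learnable spatiotemporal centers) can be sketched as follows. This is a minimal illustrative sketch, not the authors' implementation: the module names, feature dimensions, concatenation-based fusion, and softmax center assignment are all assumptions.

```python
# Hypothetical sketch of the MI-TPN pipeline described in the abstract.
# All dimensions and design details are assumptions for illustration only.
import torch
import torch.nn as nn
import torch.nn.functional as F


class MITPNSketch(nn.Module):
    def __init__(self, feat_dim=512, hidden_dim=512, num_centers=64, num_classes=101):
        super().__init__()
        # Multi-visual fusion: combine per-frame RGB and optical-flow descriptors
        # (assumed here to be a simple concatenation followed by a linear layer).
        self.fuse = nn.Linear(2 * feat_dim, feat_dim)
        # Time sequence feature modeling: LSTM over the fused per-frame features.
        self.lstm = nn.LSTM(feat_dim, hidden_dim, batch_first=True)
        # Learnable spatiotemporal centers for VLAD-style soft assignment.
        self.centers = nn.Parameter(torch.randn(num_centers, hidden_dim) * 0.01)
        self.assign = nn.Linear(hidden_dim, num_centers)
        self.classifier = nn.Linear(num_centers * hidden_dim, num_classes)

    def forward(self, rgb, flow):
        # rgb, flow: (batch, time, feat_dim) per-frame features from the two streams.
        x = self.fuse(torch.cat([rgb, flow], dim=-1))      # (B, T, D)
        h, _ = self.lstm(x)                                # (B, T, H)

        # Soft-assign each time-step feature to the centers (attention over centers).
        a = F.softmax(self.assign(h), dim=-1)              # (B, T, K)
        # Aggregate residuals to each center over all time steps (ActionVLAD-style).
        residuals = h.unsqueeze(2) - self.centers          # (B, T, K, H)
        vlad = (a.unsqueeze(-1) * residuals).sum(dim=1)    # (B, K, H)

        # Intra-normalize per center, flatten, and L2-normalize: this projects the
        # aggregated descriptor into a common feature space before classification.
        vlad = F.normalize(vlad, dim=-1).flatten(1)
        vlad = F.normalize(vlad, dim=-1)
        return self.classifier(vlad)


# Example usage with random tensors standing in for extracted two-stream features.
rgb = torch.randn(2, 16, 512)
flow = torch.randn(2, 16, 512)
logits = MITPNSketch()(rgb, flow)   # (2, 101)
```

The sketch follows the ActionVLAD idea of summing soft-assigned residuals to learnable centers, but applies it to LSTM hidden states so that temporal importance is captured before aggregation; the actual MI-TPN may differ in how fusion and attention are implemented.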