Human-Object Contour for Action Recognition with Attentional Multi-modal Fusion Network

Miao Yu, Weizhe Zhang, Qingxiang Zeng, Chao Wang, Jie Li

2019 International Conference on Artificial Intelligence in Information and Communication (ICAIIC), 2019-02-01
DOI: 10.1109/ICAIIC.2019.8669069
Citations: 4
Abstract
Human action recognition has great research and application value in intelligent video surveillance, human-computer interaction, and other communication fields. To improve the accuracy of human action recognition for video understanding, we study the extraction of human motion features and attentional fusion methods. This paper makes two main contributions. First, building on the essence of optical flow validity, we present a novel dynamic feature representation called Human-Object Contour (HOC), which combines object understanding with contextual information. Second, drawing on the principle of stacking in ensemble learning, we propose the Attentional Multi-modal Fusion Network (AMFN), which attends to different modalities according to the characteristics of each video rather than averaging them with fixed weights. Experiments show that HOC effectively complements static appearance features and that our fusion network markedly improves action recognition accuracy. Our approach achieves state-of-the-art performance on HMDB51 (72.2%) and UCF101 (96.0%).
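The contrast drawn above, attention-weighted selection over modalities versus fixed-weight averaging, can be sketched as follows. This is a minimal illustration of the general idea, not the paper's AMFN architecture; the function names and the toy per-modality scores are hypothetical, and the attention logits stand in for whatever relevance signal a trained network would produce.

```python
import numpy as np

def attention_fuse(modal_scores, attn_logits):
    """Fuse per-modality class scores using softmax attention weights.

    modal_scores: (M, C) array - class scores from M modalities
                  (e.g., static appearance stream and HOC stream)
    attn_logits:  (M,) array - per-modality relevance logits,
                  assumed to come from some learned attention module
    """
    # softmax over modalities -> per-video attention weights
    w = np.exp(attn_logits - attn_logits.max())
    w = w / w.sum()
    # weighted sum over the modality axis
    return w @ modal_scores

def average_fuse(modal_scores):
    """Baseline: fixed-weight (uniform) averaging over modalities."""
    return modal_scores.mean(axis=0)

# toy example: two modalities, two classes
scores = np.array([[0.7, 0.3],   # appearance stream
                   [0.2, 0.8]])  # motion (HOC-like) stream
fused_uniform = attention_fuse(scores, np.zeros(2))  # equals the average
fused_biased = attention_fuse(scores, np.array([5.0, 0.0]))  # favors stream 0
```

With equal logits the attentional fusion reduces to plain averaging, while strongly skewed logits let one modality dominate for a given video, which is the behavior the abstract contrasts with a fixed weighting.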