{"title":"Multi-information Complementarity Neural Networks for Multi-Modal Action Recognition","authors":"Chuang Ding, Y. Tie, L. Qi","doi":"10.1109/ISNE.2019.8896415","DOIUrl":null,"url":null,"abstract":"Multi-modal methods play an important role on action recognition. Each modal can extract different features to analyze the same motion classification. But numbers of researches always separate the one task from the others, which cause the unreasonable utilization of complementary information in the multi-modality data. Skeleton is robust to the variation of illumination, backgrounds and viewpoints, while RGB has better performance in some circumstances when there are other objects that have great effect on recognition of action, such as drinking water and eating snacks. In this paper, we propose a novel Multi-information Complementarity Neural Network (MiCNN) for human action recognition to address this problem. The proposed MiCNN can learn the features from both skeleton and RGB data to ensure the abundance of information. Besides, we design a weighted fusion block to distribute the weights reasonably, which can make each modal draw on their respective strengths. The experiments on NTU RGB-D datasets demonstrate the excellent performance of our scheme, which are superior to other methods that we have ever known.","PeriodicalId":405565,"journal":{"name":"2019 8th International Symposium on Next Generation Electronics (ISNE)","volume":"351 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2019-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"1","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2019 8th International Symposium on Next Generation Electronics (ISNE)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/ISNE.2019.8896415","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 1
Abstract
Multi-modal methods play an important role in action recognition: each modality contributes different features for classifying the same motion. However, most existing studies treat each modality in isolation, which leads to poor utilization of the complementary information present in multi-modal data. Skeleton data are robust to variations in illumination, background, and viewpoint, whereas RGB data perform better when interacting objects are essential to recognizing the action, for example drinking water or eating snacks. In this paper, we propose a novel Multi-information Complementarity Neural Network (MiCNN) for human action recognition to address this problem. The proposed MiCNN learns features from both skeleton and RGB data to ensure rich information. In addition, we design a weighted fusion block that distributes the modality weights appropriately, allowing each modality to contribute its respective strengths. Experiments on the NTU RGB+D dataset demonstrate the strong performance of our scheme, which outperforms the other methods we compared against.
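To make the weighted fusion idea concrete, below is a minimal sketch of how a fusion block with learnable, normalized per-modality weights could combine skeleton-stream and RGB-stream outputs. This is an illustrative assumption-based example, not the authors' exact architecture; the class name, feature dimensions, and the use of a softmax over learnable scalars are hypothetical choices made only for demonstration.

```python
# Minimal sketch of a weighted fusion block (assumed design, not the paper's
# exact implementation): one learnable scalar per modality, normalized with a
# softmax so the weights are positive and sum to one.
import torch
import torch.nn as nn


class WeightedFusionBlock(nn.Module):
    """Fuses per-modality score vectors with learnable, normalized weights."""

    def __init__(self, num_modalities: int = 2):
        super().__init__()
        # One learnable logit per modality; softmax turns these into weights.
        self.logits = nn.Parameter(torch.zeros(num_modalities))

    def forward(self, features):
        # features: list of tensors with identical shape, e.g. (batch, classes)
        weights = torch.softmax(self.logits, dim=0)
        fused = sum(w * f for w, f in zip(weights, features))
        return fused


# Usage: fuse skeleton-stream and RGB-stream class scores
# (60 classes as in NTU RGB+D, batch size 8 chosen arbitrarily).
fusion = WeightedFusionBlock(num_modalities=2)
skeleton_scores = torch.randn(8, 60)
rgb_scores = torch.randn(8, 60)
fused_scores = fusion([skeleton_scores, rgb_scores])
```

Because the weights are learned end to end, the fusion can lean on the skeleton stream for viewpoint- or illumination-sensitive actions and on the RGB stream for object-dependent actions, which is the complementarity the abstract describes.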