Key frame and skeleton extraction for deep learning-based human action recognition
Hai-Hong Phan, T. T. Nguyen, Huu Phuc Ngo, Huu-Nhan Nguyen, Do Minh Hieu, Cao Truong Tran, Bao Ngoc Vi
2021 RIVF International Conference on Computing and Communication Technologies (RIVF), pp. 1-6
Published: 2021-08-19 · DOI: 10.1109/RIVF51545.2021.9642132
Citations: 1
Abstract
In this paper, we propose an efficient approach for activity recognition in videos that combines key frame extraction with deep learning architectures, named KFSENet. First, we propose a key frame selection technique that operates on a motion sequence of 2D frames and uses the gradient of optical flow to select the most important frames, i.e., those that best characterize different actions. From these frames, we extract key points using pose estimation techniques and feed them into an efficient deep learning network to learn the action model. In this way, the proposed method is able to discard insignificant frames and shorten the motion vector. Because only the remaining essential, informative frames are considered during action recognition, the proposed method is both fast and robust. In our experiments, we evaluate the proposed method extensively on the public UCF Sport dataset and on our self-built HNH dataset. We verify that our proposed algorithm achieves state-of-the-art performance on these datasets.