{"title":"Novel Human Activity Recognition by graph engineered ensemble deep learning model","authors":"Mamta Ghalan, Rajesh Kumar Aggarwal","doi":"10.1016/j.ifacsc.2024.100253","DOIUrl":null,"url":null,"abstract":"<div><p>This research delves into the domain of Human Activity Recognition (HAR) through sensor data analysis, offering a comprehensive exploration of three diverse datasets: UniMiB-SHAR, Motion Sense, and WISDM Actitracker. The UniMiB-SHAR dataset encompasses a diverse array of linear as well as non-linear and complex activities which involve the movement of more than one joint or muscle (for example Hitting Obstacles, jogging and falling with face down). This motion generates highly correlated sensor readings over a certain period of time. In this case, Convolution Neural Networks (CNNs) are effective in feature extraction as well as classification of HAR activities, but they may not fully grasp the combined features of spatial as well as temporal aspects in the HAR datasets and heavily rely on labelled data. Whereas, Graph convolution networks (GCN), with their capacity to model complex interactions through graph structure, complement CNN’s capabilities in classifying non-linear activities in the HAR dataset. By leveraging the Knowledge graph structure and acquiring the feature embeddings from the GCN model, in this study, a Noval ensemble CNN model is proposed for the classification of activities. The novel HAR pipeline is termed as Graph Engineered EnsemCNN HAR (GE-EnsemCNN-HAR) and its performance is evaluated on HAR datasets. Proposed model demonstrated a noteworthy accuracy of 93.5% on UniMiB-SHAR dataset, surpassing the Shallow CNN model with GNN with an improvement of 20.14%. The proposed model achieved a notable accuracy rate of 96.18% and 98% when evaluated on the Motion Sense and WISDM Actitracker dataset.</p></div>","PeriodicalId":29926,"journal":{"name":"IFAC Journal of Systems and Control","volume":"27 ","pages":"Article 100253"},"PeriodicalIF":1.8000,"publicationDate":"2024-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"IFAC Journal of Systems and Control","FirstCategoryId":"1085","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S2468601824000142","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q3","JCRName":"AUTOMATION & CONTROL SYSTEMS","Score":null,"Total":0}
Abstract
This research explores Human Activity Recognition (HAR) through sensor data analysis, offering a comprehensive evaluation of three diverse datasets: UniMiB-SHAR, Motion Sense, and WISDM Actitracker. The UniMiB-SHAR dataset encompasses a diverse array of linear as well as non-linear, complex activities that involve the movement of more than one joint or muscle (for example, hitting obstacles, jogging, and falling face down). Such motion generates highly correlated sensor readings over a period of time. Convolutional Neural Networks (CNNs) are effective at both feature extraction and classification of HAR activities, but they may not fully capture the combined spatial and temporal features in HAR datasets and rely heavily on labelled data. In contrast, Graph Convolutional Networks (GCNs), with their capacity to model complex interactions through graph structure, complement CNNs' capabilities in classifying non-linear activities in HAR datasets. By leveraging a knowledge-graph structure and acquiring feature embeddings from the GCN model, this study proposes a novel ensemble CNN model for the classification of activities. The novel HAR pipeline is termed Graph Engineered EnsemCNN HAR (GE-EnsemCNN-HAR), and its performance is evaluated on the HAR datasets. The proposed model demonstrated a noteworthy accuracy of 93.5% on the UniMiB-SHAR dataset, surpassing the shallow CNN model with GNN by 20.14%. It achieved accuracies of 96.18% and 98% on the Motion Sense and WISDM Actitracker datasets, respectively.
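
The abstract does not detail GE-EnsemCNN-HAR's internals, but the core idea it describes, fusing GCN feature embeddings with CNN features extracted from raw sensor windows, can be illustrated. Below is a minimal PyTorch sketch under stated assumptions: treating sensor channels as graph nodes, the specific layer sizes, and concatenation-based fusion are all illustrative choices, not the authors' published architecture.

```python
# Illustrative sketch of fusing GCN embeddings with a 1D CNN for sensor-based
# HAR. Graph construction, layer sizes, and the fusion scheme are assumptions;
# the abstract does not specify GE-EnsemCNN-HAR's internals.
import torch
import torch.nn as nn
import torch.nn.functional as F


def normalized_adjacency(adj: torch.Tensor) -> torch.Tensor:
    """Symmetrically normalize A with self-loops: D^{-1/2} (A + I) D^{-1/2}."""
    a_hat = adj + torch.eye(adj.size(0))
    deg_inv_sqrt = a_hat.sum(dim=1).pow(-0.5)
    return deg_inv_sqrt.unsqueeze(1) * a_hat * deg_inv_sqrt.unsqueeze(0)


class GCNLayer(nn.Module):
    """One graph-convolution layer: H' = ReLU(A_hat @ H @ W)."""
    def __init__(self, in_dim: int, out_dim: int):
        super().__init__()
        self.linear = nn.Linear(in_dim, out_dim)

    def forward(self, h: torch.Tensor, a_hat: torch.Tensor) -> torch.Tensor:
        return F.relu(a_hat @ self.linear(h))


class GraphEngineeredHAR(nn.Module):
    """CNN over raw sensor windows plus a GCN over a sensor-channel graph,
    fused by concatenation before the classifier (illustrative fusion only)."""
    def __init__(self, n_channels: int, window: int, n_classes: int):
        super().__init__()
        self.cnn = nn.Sequential(                    # temporal feature extractor
            nn.Conv1d(n_channels, 32, kernel_size=5, padding=2),
            nn.ReLU(),
            nn.MaxPool1d(2),
            nn.Conv1d(32, 64, kernel_size=5, padding=2),
            nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),                 # -> (batch, 64, 1)
        )
        self.gcn = GCNLayer(window, 16)              # one node per sensor channel
        self.classifier = nn.Linear(64 + n_channels * 16, n_classes)

    def forward(self, x: torch.Tensor, a_hat: torch.Tensor) -> torch.Tensor:
        # x: (batch, n_channels, window); graph nodes are the sensor channels.
        cnn_feat = self.cnn(x).squeeze(-1)           # (batch, 64)
        gcn_feat = self.gcn(x, a_hat).flatten(1)     # (batch, n_channels * 16)
        return self.classifier(torch.cat([cnn_feat, gcn_feat], dim=1))


# Example: 3 accelerometer axes, 128-sample windows, 9 activity classes.
adj = torch.ones(3, 3)                               # fully connected channel graph
model = GraphEngineeredHAR(n_channels=3, window=128, n_classes=9)
logits = model(torch.randn(8, 3, 128), normalized_adjacency(adj))
print(logits.shape)                                  # torch.Size([8, 9])
```

Concatenating the two feature streams before a single linear classifier is one simple way to let graph-derived embeddings complement the CNN's temporal features; the paper's ensemble may combine them differently.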