{"title":"Multi-DGI: Multi-head Pooling Deep Graph Infomax for Human Activity Recognition","authors":"Yifan Chen, Haiqi Zhu, Zhiyuan Chen","doi":"10.1007/s11036-024-02306-y","DOIUrl":null,"url":null,"abstract":"<p>Human Activity Recognition (HAR) is a crucial research domain with substantial real-world implications. Despite the extensive application of machine learning techniques in various domains, most traditional models neglect the inherent spatio-temporal relationships within time-series data. To address this limitation, we propose an unsupervised Graph Representation Learning (GRL) model named Multi-head Pooling Deep Graph Infomax (Multi-DGI), which is applied to reveal the spatio-temporal patterns from the graph-structured HAR data. By employing an adaptive Multi-head Pooling mechanism, Multi-DGI captures comprehensive graph summaries, furnishing general embeddings for downstream classifiers, thereby reducing dependence on graph constructions. Using the UCI WISDM dataset and three basic graph construction methods, Multi-DGI delivers a minimum enhancement of 2.9%, 1.0%, 7.5%, and 6.4% in Accuracy, Precision, Recall, and Macro-F1 scores, respectively. The demonstrated robustness of Multi-DGI in extracting intricate patterns from rudimentary graphs reduces the dependence of GRL on high-quality graphs, thereby broadening its applicability in time-series analysis. Our code and data are available at https://github.com/AnguoCYF/Multi-DGI.</p>","PeriodicalId":501103,"journal":{"name":"Mobile Networks and Applications","volume":"18 1","pages":""},"PeriodicalIF":0.0000,"publicationDate":"2024-03-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Mobile Networks and Applications","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1007/s11036-024-02306-y","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Abstract
Human Activity Recognition (HAR) is a crucial research domain with substantial real-world implications. Despite the extensive application of machine learning techniques across domains, most traditional models neglect the inherent spatio-temporal relationships within time-series data. To address this limitation, we propose an unsupervised Graph Representation Learning (GRL) model named Multi-head Pooling Deep Graph Infomax (Multi-DGI), which reveals spatio-temporal patterns in graph-structured HAR data. By employing an adaptive Multi-head Pooling mechanism, Multi-DGI captures comprehensive graph summaries and furnishes general embeddings for downstream classifiers, thereby reducing dependence on the specific graph construction. On the UCI WISDM dataset with three basic graph construction methods, Multi-DGI delivers improvements of at least 2.9%, 1.0%, 7.5%, and 6.4% in Accuracy, Precision, Recall, and Macro-F1 scores, respectively. The demonstrated robustness of Multi-DGI in extracting intricate patterns from rudimentary graphs reduces the dependence of GRL on high-quality graphs, thereby broadening its applicability in time-series analysis. Our code and data are available at https://github.com/AnguoCYF/Multi-DGI.
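The abstract does not spell out the architecture, so the following is only a minimal, illustrative sketch of what a Deep Graph Infomax objective with a multi-head pooling readout could look like in PyTorch. It assumes a dense normalized adjacency matrix, a one-layer GCN encoder, attention-based pooling heads, and a feature-shuffling corruption function; all class names, layer sizes, and the way head summaries are combined are assumptions for illustration, not the authors' implementation.

```python
# Hedged sketch: DGI-style contrastive objective with a multi-head pooling
# readout. Assumes a dense normalized adjacency a_hat (N x N) and node
# features x (N x F). Not the paper's actual code.
import torch
import torch.nn as nn
import torch.nn.functional as F


class GCNEncoder(nn.Module):
    """One-layer GCN: H = PReLU(A_hat @ X @ W)."""
    def __init__(self, in_dim, hid_dim):
        super().__init__()
        self.lin = nn.Linear(in_dim, hid_dim, bias=False)
        self.act = nn.PReLU()

    def forward(self, x, a_hat):
        return self.act(a_hat @ self.lin(x))


class MultiHeadPooling(nn.Module):
    """K attention-pooling heads; each head scores nodes and produces its own
    graph summary, and the head summaries are averaged into one summary
    vector (one plausible reading of an 'adaptive multi-head pooling' readout)."""
    def __init__(self, hid_dim, num_heads=4):
        super().__init__()
        self.score = nn.Linear(hid_dim, num_heads)  # per-head node scores

    def forward(self, h):
        attn = torch.softmax(self.score(h), dim=0)   # (N, K) attention per head
        summaries = attn.t() @ h                     # (K, hid_dim) head summaries
        return torch.sigmoid(summaries.mean(dim=0))  # (hid_dim,) graph summary


class MultiHeadDGI(nn.Module):
    """DGI with the usual mean readout replaced by the multi-head pooling above."""
    def __init__(self, in_dim, hid_dim, num_heads=4):
        super().__init__()
        self.encoder = GCNEncoder(in_dim, hid_dim)
        self.readout = MultiHeadPooling(hid_dim, num_heads)
        self.disc = nn.Bilinear(hid_dim, hid_dim, 1)  # node-vs-summary discriminator

    def forward(self, x, a_hat):
        h_pos = self.encoder(x, a_hat)                 # embeddings of the real graph
        x_neg = x[torch.randperm(x.size(0))]           # corruption: shuffle node features
        h_neg = self.encoder(x_neg, a_hat)
        s = self.readout(h_pos)                        # multi-head graph summary
        logits_pos = self.disc(h_pos, s.expand_as(h_pos)).squeeze(-1)
        logits_neg = self.disc(h_neg, s.expand_as(h_neg)).squeeze(-1)
        logits = torch.cat([logits_pos, logits_neg])
        labels = torch.cat([torch.ones_like(logits_pos),
                            torch.zeros_like(logits_neg)])
        loss = F.binary_cross_entropy_with_logits(logits, labels)
        return loss, h_pos                             # h_pos feeds a downstream classifier
```

In this reading, the unsupervised loss maximizes agreement between node embeddings and the pooled graph summary for the real graph while minimizing it for the corrupted one, and the learned embeddings `h_pos` are then handed to an ordinary downstream classifier, consistent with the pipeline the abstract describes.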