Hailin Zou;Zijie Chen;Jing Zhang;Lei Wang;Fuchun Zhang;Jianqing Li;Yuanyuan Pan
{"title":"GT-WHAR:利用多个传感器进行可穿戴人体活动识别的通用图式时态框架","authors":"Hailin Zou;Zijie Chen;Jing Zhang;Lei Wang;Fuchun Zhang;Jianqing Li;Yuanyuan Pan","doi":"10.1109/TETCI.2024.3378331","DOIUrl":null,"url":null,"abstract":"Using wearable sensors to identify human activities has elicited significant interest within the discipline of ubiquitous computing for everyday facilitation. Recent research has employed hybrid models to better leverage the modal information of sensors and temporal information, enabling improved performance for wearable human activity recognition. Nevertheless, the lack of effective exploitation of human structural information and limited capacity for cross-channel fusion remains a major challenge. This study proposes a generic design, called GT-WHAR, to accommodate the varying application scenarios and datasets while performing effective feature extraction and fusion. Firstly, a novel and unified representation paradigm, namely \n<italic>Body-Sensing Graph Representation</i>\n, has been proposed to represent body movement by a graph set, which incorporates structural information by considering the intrinsic connectivity of the skeletal structure. Secondly, the newly designed \n<italic>Body-Node Attention Graph Network</i>\n employs graph neural networks to extract and fuse the cross-channel information within the graph set. Eventually, the graph network has been embedded in the proposed \n<italic>Bidirectional Temporal Learning Network</i>\n, facilitating the extraction of temporal information in conjunction with the learned structural features. GT-WHAR outperformed the state-of-the-art methods in extensive experiments conducted on benchmark datasets, proving its validity and efficacy. Besides, we have demonstrated the generality of the framework through multiple research questions and provided an in-depth investigation of various influential factors.","PeriodicalId":13135,"journal":{"name":"IEEE Transactions on Emerging Topics in Computational Intelligence","volume":"8 6","pages":"3912-3924"},"PeriodicalIF":5.3000,"publicationDate":"2024-04-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"GT-WHAR: A Generic Graph-Based Temporal Framework for Wearable Human Activity Recognition With Multiple Sensors\",\"authors\":\"Hailin Zou;Zijie Chen;Jing Zhang;Lei Wang;Fuchun Zhang;Jianqing Li;Yuanyuan Pan\",\"doi\":\"10.1109/TETCI.2024.3378331\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Using wearable sensors to identify human activities has elicited significant interest within the discipline of ubiquitous computing for everyday facilitation. Recent research has employed hybrid models to better leverage the modal information of sensors and temporal information, enabling improved performance for wearable human activity recognition. Nevertheless, the lack of effective exploitation of human structural information and limited capacity for cross-channel fusion remains a major challenge. This study proposes a generic design, called GT-WHAR, to accommodate the varying application scenarios and datasets while performing effective feature extraction and fusion. Firstly, a novel and unified representation paradigm, namely \\n<italic>Body-Sensing Graph Representation</i>\\n, has been proposed to represent body movement by a graph set, which incorporates structural information by considering the intrinsic connectivity of the skeletal structure. 
Secondly, the newly designed \\n<italic>Body-Node Attention Graph Network</i>\\n employs graph neural networks to extract and fuse the cross-channel information within the graph set. Eventually, the graph network has been embedded in the proposed \\n<italic>Bidirectional Temporal Learning Network</i>\\n, facilitating the extraction of temporal information in conjunction with the learned structural features. GT-WHAR outperformed the state-of-the-art methods in extensive experiments conducted on benchmark datasets, proving its validity and efficacy. Besides, we have demonstrated the generality of the framework through multiple research questions and provided an in-depth investigation of various influential factors.\",\"PeriodicalId\":13135,\"journal\":{\"name\":\"IEEE Transactions on Emerging Topics in Computational Intelligence\",\"volume\":\"8 6\",\"pages\":\"3912-3924\"},\"PeriodicalIF\":5.3000,\"publicationDate\":\"2024-04-02\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"IEEE Transactions on Emerging Topics in Computational Intelligence\",\"FirstCategoryId\":\"94\",\"ListUrlMain\":\"https://ieeexplore.ieee.org/document/10483025/\",\"RegionNum\":3,\"RegionCategory\":\"计算机科学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q1\",\"JCRName\":\"COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"IEEE Transactions on Emerging Topics in Computational Intelligence","FirstCategoryId":"94","ListUrlMain":"https://ieeexplore.ieee.org/document/10483025/","RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE","Score":null,"Total":0}
GT-WHAR: A Generic Graph-Based Temporal Framework for Wearable Human Activity Recognition With Multiple Sensors
Using wearable sensors to identify human activities has attracted significant interest in ubiquitous computing for supporting everyday life. Recent research has employed hybrid models to better leverage both the modal information of sensors and temporal information, improving performance in wearable human activity recognition. Nevertheless, the lack of effective exploitation of human structural information and the limited capacity for cross-channel fusion remain major challenges. This study proposes a generic design, called GT-WHAR, that accommodates varying application scenarios and datasets while performing effective feature extraction and fusion. First, a novel and unified representation paradigm, the Body-Sensing Graph Representation, is proposed to represent body movement as a graph set, incorporating structural information through the intrinsic connectivity of the skeletal structure. Second, the newly designed Body-Node Attention Graph Network employs graph neural networks to extract and fuse cross-channel information within the graph set. Finally, the graph network is embedded in the proposed Bidirectional Temporal Learning Network, which extracts temporal information in conjunction with the learned structural features. GT-WHAR outperformed state-of-the-art methods in extensive experiments on benchmark datasets, demonstrating its validity and efficacy. In addition, we demonstrate the generality of the framework through multiple research questions and provide an in-depth investigation of various influential factors.
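To make the pipeline described in the abstract concrete, below is a minimal, hypothetical sketch of a graph-then-temporal model in PyTorch. A GAT-style attention layer stands in for the Body-Node Attention Graph Network and a bidirectional LSTM for the Bidirectional Temporal Learning Network; all module names, shapes, and hyperparameters are assumptions for illustration, not the authors' released implementation.

```python
import torch
import torch.nn as nn

class BodyGraphAttention(nn.Module):
    """One attention-weighted message-passing step over a fixed body graph."""
    def __init__(self, in_dim: int, out_dim: int, adjacency: torch.Tensor):
        super().__init__()
        # Adjacency encodes assumed skeletal connectivity; self-loops are
        # included so every node has at least one neighbor under the softmax.
        self.register_buffer("adj", adjacency)
        self.proj = nn.Linear(in_dim, out_dim)
        self.attn = nn.Linear(2 * out_dim, 1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, nodes, in_dim) -> (batch, nodes, out_dim)
        h = self.proj(x)
        n = h.size(1)
        hi = h.unsqueeze(2).expand(-1, -1, n, -1)   # (B, N, N, D) receiver
        hj = h.unsqueeze(1).expand(-1, n, -1, -1)   # (B, N, N, D) sender
        scores = self.attn(torch.cat([hi, hj], dim=-1)).squeeze(-1)
        scores = scores.masked_fill(self.adj == 0, float("-inf"))
        alpha = torch.softmax(scores, dim=-1)       # attention over neighbors
        return torch.relu(alpha @ h)                # fused node features

class GraphTemporalHAR(nn.Module):
    """Per-step graph fusion followed by bidirectional temporal learning."""
    def __init__(self, n_nodes, in_dim, hid_dim, n_classes, adjacency):
        super().__init__()
        self.gnn = BodyGraphAttention(in_dim, hid_dim, adjacency)
        self.rnn = nn.LSTM(n_nodes * hid_dim, hid_dim,
                           batch_first=True, bidirectional=True)
        self.head = nn.Linear(2 * hid_dim, n_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, time, nodes, in_dim), one node per sensor placement
        b, t, n, d = x.shape
        h = self.gnn(x.reshape(b * t, n, d)).reshape(b, t, -1)
        out, _ = self.rnn(h)                        # (B, T, 2 * hid_dim)
        return self.head(out[:, -1])                # activity-class logits

# Hypothetical usage: 5 sensor nodes, 6 raw channels each, 4 activities.
adj = torch.eye(5)
adj[0, 1] = adj[1, 0] = adj[0, 2] = adj[2, 0] = 1.0  # toy skeletal links
model = GraphTemporalHAR(n_nodes=5, in_dim=6, hid_dim=32,
                         n_classes=4, adjacency=adj)
logits = model(torch.randn(8, 100, 5, 6))            # -> (8, 4)
```

The key design point, per the abstract, is that cross-channel fusion happens on the body graph at each time step before temporal modeling, so the recurrent network sees structurally fused features rather than raw concatenated channels.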
About the journal:
The IEEE Transactions on Emerging Topics in Computational Intelligence (TETCI) publishes original articles on emerging aspects of computational intelligence, including theory, applications, and surveys.
TETCI is an electronic-only publication and publishes six issues per year.
Authors are encouraged to submit manuscripts on any emerging topic in computational intelligence, especially nature-inspired computing topics not covered by other IEEE Computational Intelligence Society journals. Illustrative examples include glial cell networks, computational neuroscience, brain-computer interfaces, ambient intelligence, non-fuzzy computing with words, artificial life, cultural learning, artificial endocrine networks, social reasoning, artificial hormone networks, and computational intelligence for the IoT and Smart-X technologies.