Pub Date: 2024-04-02
DOI: 10.1109/TETCI.2024.3378331
Hailin Zou;Zijie Chen;Jing Zhang;Lei Wang;Fuchun Zhang;Jianqing Li;Yuanyuan Pan
Using wearable sensors to identify human activities has elicited significant interest in ubiquitous computing, as it facilitates many everyday applications. Recent research has employed hybrid models to better exploit both the modal information of sensors and temporal information, improving performance in wearable human activity recognition. Nevertheless, the ineffective exploitation of human structural information and the limited capacity for cross-channel fusion remain major challenges. This study proposes a generic design, called GT-WHAR, that accommodates varying application scenarios and datasets while performing effective feature extraction and fusion. First, a novel and unified representation paradigm, namely Body-Sensing Graph Representation
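The abstract is truncated before the Body-Sensing Graph Representation is defined, so the following is only a minimal sketch of the general idea it names: treating each on-body sensor as a graph node and encoding the body's skeletal structure as edges. The sensor names, edge set, and flattening of each sensor's window into a node feature vector are all illustrative assumptions, not details from the paper.

```python
import numpy as np

# Hypothetical on-body sensor placements (graph nodes); the actual
# node set of BSGR is not given in the truncated abstract.
SENSORS = ["left_wrist", "right_wrist", "chest", "left_ankle", "right_ankle"]

# Assumed edges following the body's skeletal structure: each limb
# sensor connects to the torso-mounted sensor.
EDGES = [("left_wrist", "chest"), ("right_wrist", "chest"),
         ("left_ankle", "chest"), ("right_ankle", "chest")]

def build_body_sensing_graph(window: np.ndarray) -> tuple[np.ndarray, np.ndarray]:
    """Pack one sliding window of multi-sensor readings into graph form.

    window: (num_sensors, timesteps, channels) array, one row per sensor.
    Returns (node_features, adjacency), where node_features flattens each
    sensor's window into a node feature vector and adjacency encodes the
    assumed skeletal connectivity (symmetric, with self-loops).
    """
    n = len(SENSORS)
    idx = {name: i for i, name in enumerate(SENSORS)}
    adj = np.eye(n)  # self-loops keep each node's own signal in fusion
    for a, b in EDGES:
        adj[idx[a], idx[b]] = adj[idx[b], idx[a]] = 1.0
    node_features = window.reshape(n, -1)  # (nodes, timesteps * channels)
    return node_features, adj

# Example: a 2 s window at 50 Hz from 5 triaxial accelerometers.
x = np.random.randn(len(SENSORS), 100, 3)
feats, adj = build_body_sensing_graph(x)
print(feats.shape, adj.shape)  # (5, 300) (5, 5)
```

Representing the sensors this way would let a graph-based model perform cross-channel fusion along body-structure edges rather than treating channels as an unordered set, which is the motivation the abstract gives for the design.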