MetaFormer
Biyun Sheng, Rui Han, Fu Xiao, Zhengxin Guo, Linqing Gui
Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies
DOI: 10.1145/3643550 | Published: 2024-03-06
Abstract
WiFi-based action recognition has attracted increasing attention due to its convenience and universality in real-world applications, but its domain dependency leads to poor generalization to new sensing environments or subjects. The majority of existing solutions fail to sufficiently extract action-related features from WiFi signals. Moreover, they cannot make full use of the target data because only the labelled samples are taken into consideration. To cope with these issues, we propose a WiFi-based sensing system, MetaFormer, which can effectively recognize actions from unseen domains with only one labelled target sample per category. Specifically, MetaFormer first constructs a novel spatial-temporal transformer feature extraction structure with dense-sparse input, named DS-STT, to capture the primary and affiliated movements of an action. It then designs a Meta-teacher framework that meta-pre-trains on source tasks and updates model parameters via dynamic pseudo-label enhancement, bridging the relationship between the labelled and unlabelled target samples. To validate the performance of MetaFormer, we conduct comprehensive evaluations on the SignFi, Widar and Wiar datasets and achieve superior performance in the one-shot case.
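The abstract does not specify the internals of DS-STT, so the following is only a minimal sketch of the dense-sparse idea it describes: encoding a CSI window once at full temporal resolution (primary movements) and once temporally subsampled (affiliated movements), then fusing both token streams with a temporal transformer. All module names, shapes, and hyperparameters (e.g. DenseSparseSTT, n_subcarriers=90, sparse_stride=4) are illustrative assumptions, not the authors' implementation.

```python
# Hypothetical dense-sparse spatial-temporal transformer input stage (PyTorch).
# This is an assumption-laden sketch of the dense-sparse input concept, not the
# paper's actual DS-STT architecture.
import torch
import torch.nn as nn


class DenseSparseSTT(nn.Module):
    """Embed a CSI window as a dense (full frame-rate) token stream plus a sparse
    (temporally subsampled) token stream, then fuse them with a transformer encoder."""

    def __init__(self, n_subcarriers=90, d_model=128, n_heads=4, n_layers=2, sparse_stride=4):
        super().__init__()
        self.sparse_stride = sparse_stride
        self.dense_proj = nn.Linear(n_subcarriers, d_model)    # per-frame spatial embedding
        self.sparse_proj = nn.Linear(n_subcarriers, d_model)
        encoder_layer = nn.TransformerEncoderLayer(d_model=d_model, nhead=n_heads, batch_first=True)
        self.temporal_encoder = nn.TransformerEncoder(encoder_layer, num_layers=n_layers)
        self.cls_token = nn.Parameter(torch.zeros(1, 1, d_model))

    def forward(self, csi):                                     # csi: (batch, time, subcarriers)
        dense = self.dense_proj(csi)                            # full-rate tokens
        sparse = self.sparse_proj(csi[:, ::self.sparse_stride]) # subsampled tokens
        cls = self.cls_token.expand(csi.size(0), -1, -1)
        tokens = torch.cat([cls, dense, sparse], dim=1)         # fuse both streams
        encoded = self.temporal_encoder(tokens)
        return encoded[:, 0]                                    # class-token feature


# Example: a batch of 8 two-second CSI windows sampled at 100 Hz over 90 subcarriers.
features = DenseSparseSTT()(torch.randn(8, 200, 90))
print(features.shape)                                           # torch.Size([8, 128])
```

The extracted feature would then feed a classification head; in a one-shot meta-learning setting such as the one described above, it could equally serve as the embedding compared against per-class prototypes built from the single labelled target sample.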