{"title":"MU-MAE: 基于多模态屏蔽自动编码器的单次学习","authors":"Rex Liu, Xin Liu","doi":"arxiv-2408.04243","DOIUrl":null,"url":null,"abstract":"With the exponential growth of multimedia data, leveraging multimodal sensors\npresents a promising approach for improving accuracy in human activity\nrecognition. Nevertheless, accurately identifying these activities using both\nvideo data and wearable sensor data presents challenges due to the\nlabor-intensive data annotation, and reliance on external pretrained models or\nadditional data. To address these challenges, we introduce Multimodal Masked\nAutoencoders-Based One-Shot Learning (Mu-MAE). Mu-MAE integrates a multimodal\nmasked autoencoder with a synchronized masking strategy tailored for wearable\nsensors. This masking strategy compels the networks to capture more meaningful\nspatiotemporal features, which enables effective self-supervised pretraining\nwithout the need for external data. Furthermore, Mu-MAE leverages the\nrepresentation extracted from multimodal masked autoencoders as prior\ninformation input to a cross-attention multimodal fusion layer. This fusion\nlayer emphasizes spatiotemporal features requiring attention across different\nmodalities while highlighting differences from other classes, aiding in the\nclassification of various classes in metric-based one-shot learning.\nComprehensive evaluations on MMAct one-shot classification show that Mu-MAE\noutperforms all the evaluated approaches, achieving up to an 80.17% accuracy\nfor five-way one-shot multimodal classification, without the use of additional\ndata.","PeriodicalId":501480,"journal":{"name":"arXiv - CS - Multimedia","volume":"65 1","pages":""},"PeriodicalIF":0.0000,"publicationDate":"2024-08-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"MU-MAE: Multimodal Masked Autoencoders-Based One-Shot Learning\",\"authors\":\"Rex Liu, Xin Liu\",\"doi\":\"arxiv-2408.04243\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"With the exponential growth of multimedia data, leveraging multimodal sensors\\npresents a promising approach for improving accuracy in human activity\\nrecognition. Nevertheless, accurately identifying these activities using both\\nvideo data and wearable sensor data presents challenges due to the\\nlabor-intensive data annotation, and reliance on external pretrained models or\\nadditional data. To address these challenges, we introduce Multimodal Masked\\nAutoencoders-Based One-Shot Learning (Mu-MAE). Mu-MAE integrates a multimodal\\nmasked autoencoder with a synchronized masking strategy tailored for wearable\\nsensors. This masking strategy compels the networks to capture more meaningful\\nspatiotemporal features, which enables effective self-supervised pretraining\\nwithout the need for external data. Furthermore, Mu-MAE leverages the\\nrepresentation extracted from multimodal masked autoencoders as prior\\ninformation input to a cross-attention multimodal fusion layer. 
This fusion\\nlayer emphasizes spatiotemporal features requiring attention across different\\nmodalities while highlighting differences from other classes, aiding in the\\nclassification of various classes in metric-based one-shot learning.\\nComprehensive evaluations on MMAct one-shot classification show that Mu-MAE\\noutperforms all the evaluated approaches, achieving up to an 80.17% accuracy\\nfor five-way one-shot multimodal classification, without the use of additional\\ndata.\",\"PeriodicalId\":501480,\"journal\":{\"name\":\"arXiv - CS - Multimedia\",\"volume\":\"65 1\",\"pages\":\"\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2024-08-08\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"arXiv - CS - Multimedia\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/arxiv-2408.04243\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"arXiv - CS - Multimedia","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/arxiv-2408.04243","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
With the exponential growth of multimedia data, leveraging multimodal sensors is a promising approach for improving the accuracy of human activity recognition. Nevertheless, accurately identifying these activities from both video and wearable sensor data is challenging due to labor-intensive data annotation and reliance on external pretrained models or additional data. To address these challenges, we introduce Multimodal Masked Autoencoders-Based One-Shot Learning (Mu-MAE). Mu-MAE integrates a multimodal masked autoencoder with a synchronized masking strategy tailored for wearable sensors. This masking strategy compels the networks to capture more meaningful spatiotemporal features, enabling effective self-supervised pretraining without external data.
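To make the synchronized masking idea concrete, the sketch below masks the same temporal positions in every modality, so temporally aligned video and sensor tokens are hidden together during pretraining. This is a minimal sketch under stated assumptions: the tensor shapes, the 75% masking ratio, and the function name `synchronized_mask` are illustrative, not details taken from the paper.

```python
# A minimal sketch of synchronized masking across modalities, assuming the
# strategy hides the same time steps in every stream so one modality cannot
# trivially reconstruct another at the masked positions.
import torch

def synchronized_mask(tokens_per_modality, mask_ratio=0.75):
    """tokens_per_modality: list of [B, T, D_m] tensors sharing the time axis T.
    Returns the visible tokens for each modality and a [B, T] mask (1 = masked)."""
    B, T = tokens_per_modality[0].shape[:2]
    num_keep = int(T * (1 - mask_ratio))
    # One random ordering of time steps per sample, shared by all modalities.
    ids_shuffle = torch.rand(B, T).argsort(dim=1)
    ids_keep = ids_shuffle[:, :num_keep]                   # [B, num_keep]
    mask = torch.ones(B, T)
    mask[torch.arange(B).unsqueeze(1), ids_keep] = 0.0     # 0 = visible
    visible = [
        torch.gather(x, 1, ids_keep.unsqueeze(-1).expand(-1, -1, x.size(-1)))
        for x in tokens_per_modality
    ]
    return visible, mask

# Example: 16 temporally aligned tokens per modality for a batch of 2 clips.
video = torch.randn(2, 16, 768)
imu   = torch.randn(2, 16, 128)
(visible_video, visible_imu), mask = synchronized_mask([video, imu])
```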
Furthermore, Mu-MAE uses the representations extracted by the multimodal masked autoencoder as prior information for a cross-attention multimodal fusion layer. This fusion layer emphasizes the spatiotemporal features that require attention across modalities while highlighting differences from other classes, which aids classification in metric-based one-shot learning.
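Below is a minimal sketch of how a cross-attention fusion layer could feed a metric-based one-shot classifier. Only the overall structure (pretrained multimodal features, cross-attention fusion, nearest-prototype classification) follows the description above; the use of `nn.MultiheadAttention`, mean pooling, Euclidean distance, and all dimensions are illustrative assumptions rather than the paper's exact design.

```python
# A minimal sketch: fuse per-modality token embeddings (e.g., produced by the
# pretrained masked autoencoders) with cross-attention, then classify a query
# by its distance to one-shot class prototypes, prototypical-network style.
import torch
import torch.nn as nn

class CrossAttentionFusion(nn.Module):
    def __init__(self, dim=256, heads=4):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)

    def forward(self, video_tokens, sensor_tokens):
        # Video tokens attend to wearable-sensor tokens; a symmetric pass
        # (sensors attending to video) could be added the same way.
        attended, _ = self.attn(query=video_tokens, key=sensor_tokens, value=sensor_tokens)
        fused = self.norm(video_tokens + attended)
        return fused.mean(dim=1)                  # [B, dim] pooled fused embedding

def one_shot_logits(support_emb, query_emb):
    """support_emb: [num_classes, dim] (one example per class); query_emb: [Q, dim].
    Logits are negative squared Euclidean distances to each class prototype."""
    return -torch.cdist(query_emb, support_emb).pow(2)

# Five-way one-shot episode with random features standing in for real ones.
fusion = CrossAttentionFusion()
video  = torch.randn(6, 16, 256)                  # 5 support clips + 1 query clip
sensor = torch.randn(6, 16, 256)
emb = fusion(video, sensor)
logits = one_shot_logits(emb[:5], emb[5:])        # shape [1, 5]
predicted_class = logits.argmax(dim=1)
```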
Comprehensive evaluations on MMAct one-shot classification show that Mu-MAE outperforms all evaluated approaches, achieving up to 80.17% accuracy on five-way one-shot multimodal classification without additional data.