Self-Supervised Learning via VICReg Enables Training of EMG Pattern Recognition Using Continuous Data with Unclear Labels

Shriram Tallam Puranam Raghu, Dawn T. MacIsaac, Erik J. Scheme
{"title":"通过 VICReg 进行自我监督学习,利用标签不明确的连续数据训练肌电图模式识别能力","authors":"Shriram Tallam Puranam Raghu, Dawn T. MacIsaac, Erik J. Scheme","doi":"arxiv-2409.11632","DOIUrl":null,"url":null,"abstract":"In this study, we investigate the application of self-supervised learning via\npre-trained Long Short-Term Memory (LSTM) networks for training surface\nelectromyography pattern recognition models (sEMG-PR) using dynamic data with\ntransitions. While labeling such data poses challenges due to the absence of\nground-truth labels during transitions between classes, self-supervised\npre-training offers a way to circumvent this issue. We compare the performance\nof LSTMs trained with either fully-supervised or self-supervised loss to a\nconventional non-temporal model (LDA) on two data types: segmented ramp data\n(lacking transition information) and continuous dynamic data inclusive of class\ntransitions. Statistical analysis reveals that the temporal models outperform\nnon-temporal models when trained with continuous dynamic data. Additionally,\nthe proposed VICReg pre-trained temporal model with continuous dynamic data\nsignificantly outperformed all other models. Interestingly, when using only\nramp data, the LSTM performed worse than the LDA, suggesting potential\noverfitting due to the absence of sufficient dynamics. This highlights the\ninterplay between data type and model choice. Overall, this work highlights the\nimportance of representative dynamics in training data and the potential for\nleveraging self-supervised approaches to enhance sEMG-PR models.","PeriodicalId":501034,"journal":{"name":"arXiv - EE - Signal Processing","volume":"54 1","pages":""},"PeriodicalIF":0.0000,"publicationDate":"2024-09-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Self-Supervised Learning via VICReg Enables Training of EMG Pattern Recognition Using Continuous Data with Unclear Labels\",\"authors\":\"Shriram Tallam Puranam Raghu, Dawn T. MacIsaac, Erik J. Scheme\",\"doi\":\"arxiv-2409.11632\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"In this study, we investigate the application of self-supervised learning via\\npre-trained Long Short-Term Memory (LSTM) networks for training surface\\nelectromyography pattern recognition models (sEMG-PR) using dynamic data with\\ntransitions. While labeling such data poses challenges due to the absence of\\nground-truth labels during transitions between classes, self-supervised\\npre-training offers a way to circumvent this issue. We compare the performance\\nof LSTMs trained with either fully-supervised or self-supervised loss to a\\nconventional non-temporal model (LDA) on two data types: segmented ramp data\\n(lacking transition information) and continuous dynamic data inclusive of class\\ntransitions. Statistical analysis reveals that the temporal models outperform\\nnon-temporal models when trained with continuous dynamic data. Additionally,\\nthe proposed VICReg pre-trained temporal model with continuous dynamic data\\nsignificantly outperformed all other models. Interestingly, when using only\\nramp data, the LSTM performed worse than the LDA, suggesting potential\\noverfitting due to the absence of sufficient dynamics. This highlights the\\ninterplay between data type and model choice. 
Overall, this work highlights the\\nimportance of representative dynamics in training data and the potential for\\nleveraging self-supervised approaches to enhance sEMG-PR models.\",\"PeriodicalId\":501034,\"journal\":{\"name\":\"arXiv - EE - Signal Processing\",\"volume\":\"54 1\",\"pages\":\"\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2024-09-18\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"arXiv - EE - Signal Processing\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/arxiv-2409.11632\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"arXiv - EE - Signal Processing","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/arxiv-2409.11632","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 0

Abstract

In this study, we investigate the application of self-supervised learning via pre-trained Long Short-Term Memory (LSTM) networks for training surface electromyography pattern recognition models (sEMG-PR) using dynamic data with transitions. While labeling such data poses challenges due to the absence of ground-truth labels during transitions between classes, self-supervised pre-training offers a way to circumvent this issue. We compare the performance of LSTMs trained with either fully-supervised or self-supervised loss to a conventional non-temporal model (LDA) on two data types: segmented ramp data (lacking transition information) and continuous dynamic data inclusive of class transitions. Statistical analysis reveals that the temporal models outperform non-temporal models when trained with continuous dynamic data. Additionally, the proposed VICReg pre-trained temporal model with continuous dynamic data significantly outperformed all other models. Interestingly, when using only ramp data, the LSTM performed worse than the LDA, suggesting potential overfitting due to the absence of sufficient dynamics. This highlights the interplay between data type and model choice. Overall, this work highlights the importance of representative dynamics in training data and the potential for leveraging self-supervised approaches to enhance sEMG-PR models.
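For readers unfamiliar with the self-supervised objective referenced above, the following is a minimal sketch of a VICReg-style pre-training step for a temporal sEMG encoder, written in PyTorch. It is illustrative only: the encoder architecture, embedding size, loss coefficients (25/25/1), and the noise-perturbed second view are assumptions for demonstration, not the authors' actual model, feature set, or augmentation scheme.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class LSTMEncoder(nn.Module):
    """Toy temporal encoder: maps an sEMG feature sequence to one embedding."""

    def __init__(self, in_features: int, hidden: int = 128, emb_dim: int = 64):
        super().__init__()
        self.lstm = nn.LSTM(in_features, hidden, batch_first=True)
        self.head = nn.Linear(hidden, emb_dim)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, time, features); use the last hidden state as the summary.
        _, (h_n, _) = self.lstm(x)
        return self.head(h_n[-1])


def vicreg_loss(z_a, z_b, sim_coeff=25.0, std_coeff=25.0, cov_coeff=1.0):
    """VICReg objective for two (batch, dim) embeddings of two views."""
    n, d = z_a.shape

    # Invariance: two views of the same window should embed similarly.
    sim_loss = F.mse_loss(z_a, z_b)

    # Variance: hinge on the per-dimension std to prevent representational collapse.
    std_a = torch.sqrt(z_a.var(dim=0) + 1e-4)
    std_b = torch.sqrt(z_b.var(dim=0) + 1e-4)
    std_loss = torch.mean(F.relu(1.0 - std_a)) + torch.mean(F.relu(1.0 - std_b))

    # Covariance: penalise off-diagonal covariance to decorrelate dimensions.
    z_a = z_a - z_a.mean(dim=0)
    z_b = z_b - z_b.mean(dim=0)
    cov_a = (z_a.T @ z_a) / (n - 1)
    cov_b = (z_b.T @ z_b) / (n - 1)
    off_diag = lambda m: m - torch.diag(torch.diag(m))
    cov_loss = off_diag(cov_a).pow(2).sum() / d + off_diag(cov_b).pow(2).sum() / d

    return sim_coeff * sim_loss + std_coeff * std_loss + cov_coeff * cov_loss


# Usage sketch: pre-train on unlabeled continuous windows, no transition labels needed.
encoder = LSTMEncoder(in_features=8)
view_a = torch.randn(32, 50, 8)                     # 32 windows, 50 time steps, 8 channels
view_b = view_a + 0.05 * torch.randn_like(view_a)   # a second, perturbed view of the same windows
loss = vicreg_loss(encoder(view_a), encoder(view_b))
loss.backward()
```

Because this objective only requires pairs of views of unlabeled windows, ambiguous labels during class transitions do not affect pre-training; a classifier head can then be fine-tuned on whatever cleanly labeled segments are available.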