Synchronous Dynamic View Learning: A Framework for Autonomous Training of Activity Recognition Models Using Wearable Sensors

Seyed Ali Rokni, Hassan Ghasemzadeh
{"title":"Synchronous Dynamic View Learning: A Framework for Autonomous Training of Activity Recognition Models Using Wearable Sensors","authors":"Seyed Ali Rokni, Hassan Ghasemzadeh","doi":"10.1145/3055031.3055087","DOIUrl":null,"url":null,"abstract":"Wearable technologies play a central role in human-centered Internet-of-Things applications. Wearables leverage machine learning algorithms to detect events of interest such as physical activities and medical complications. These algorithms, however, need to be retrained upon any changes in configuration of the system, such as addition/ removal of a sensor to/ from the network or displacement/ misplacement/ mis-orientation of the physical sensors on the body. We challenge this retraining model by stimulating the vision of autonomous learning with the goal of eliminating the labor-intensive, time-consuming, and highly expensive process of collecting labeled training data in dynamic environments. We propose an approach for autonomous retraining of the machine learning algorithms in real-time without need for any new labeled training data. We focus on a dynamic setting where new sensors are added to the system and worn on various body locations. We capture the inherent correlation between observations made by a static sensor view for which trained algorithms exist and the new dynamic sensor views for which an algorithm needs to be developed. By applying our real-time dynamic-view autonomous learning approach, we achieve an average accuracy of 81.1% in activity recognition using three experimental datasets. This amount of accuracy represents more than 13.8% improvement in the accuracy due to the automatic labeling of the sensor data in the newly added sensor. This performance is only 11.2% lower than the experimental upper bound where labeled training data are collected with the new sensor.","PeriodicalId":228318,"journal":{"name":"2017 16th ACM/IEEE International Conference on Information Processing in Sensor Networks (IPSN)","volume":"75 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2017-04-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"31","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2017 16th ACM/IEEE International Conference on Information Processing in Sensor Networks (IPSN)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1145/3055031.3055087","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 31

Abstract

Wearable technologies play a central role in human-centered Internet-of-Things applications. Wearables leverage machine learning algorithms to detect events of interest such as physical activities and medical complications. These algorithms, however, need to be retrained upon any change in the configuration of the system, such as addition/removal of a sensor to/from the network or displacement/misplacement/mis-orientation of the physical sensors on the body. We challenge this retraining model by stimulating the vision of autonomous learning, with the goal of eliminating the labor-intensive, time-consuming, and highly expensive process of collecting labeled training data in dynamic environments. We propose an approach for autonomous retraining of the machine learning algorithms in real time without the need for any new labeled training data. We focus on a dynamic setting where new sensors are added to the system and worn on various body locations. We capture the inherent correlation between observations made by a static sensor view, for which trained algorithms exist, and the new dynamic sensor views, for which an algorithm needs to be developed. By applying our real-time dynamic-view autonomous learning approach, we achieve an average accuracy of 81.1% in activity recognition using three experimental datasets. This accuracy represents a more than 13.8% improvement attributable to the automatic labeling of the data from the newly added sensor, and it is only 11.2% lower than the experimental upper bound in which labeled training data are collected with the new sensor.
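To make the idea of labeling a new sensor's data from an existing one more concrete, the sketch below illustrates one plausible reading of the abstract: the already-trained "static view" classifier pseudo-labels time-synchronized windows observed by the newly added sensor, and those pseudo-labels are used to train a model for the new view. This is a minimal illustration under stated assumptions, not the authors' implementation; the function names, feature dimensions, and the choice of RandomForestClassifier are all hypothetical.

```python
# Minimal sketch of static-view-to-dynamic-view pseudo-labeling (assumptions:
# features are precomputed per synchronized window; classifier choice is illustrative).
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def train_dynamic_view(static_model, X_static_sync, X_new_sync):
    """Label the new sensor's windows with the static sensor's predictions,
    then fit a classifier for the new sensor from those pseudo-labels."""
    pseudo_labels = static_model.predict(X_static_sync)   # static view acts as the teacher
    new_model = RandomForestClassifier(n_estimators=100, random_state=0)
    new_model.fit(X_new_sync, pseudo_labels)              # new (dynamic) view learns from pseudo-labels
    return new_model

# Usage with synthetic feature windows (one row per time-synchronized window):
rng = np.random.default_rng(0)
X_static_train = rng.normal(size=(200, 12))   # labeled data previously collected for the static sensor
y_train = rng.integers(0, 4, size=200)        # four hypothetical activity classes
static_model = RandomForestClassifier(n_estimators=100, random_state=0)
static_model.fit(X_static_train, y_train)

X_static_sync = rng.normal(size=(50, 12))     # static sensor, unlabeled live stream
X_new_sync = rng.normal(size=(50, 9))         # newly added sensor, same time windows
new_model = train_dynamic_view(static_model, X_static_sync, X_new_sync)
```

In this reading, no new labeled data are ever collected for the added sensor; the quality of the new model is bounded by how reliably the static view labels the shared time windows, which is consistent with the gap the abstract reports between the autonomous approach and the labeled-data upper bound.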