{"title":"Synchronous Dynamic View Learning: A Framework for Autonomous Training of Activity Recognition Models Using Wearable Sensors","authors":"Seyed Ali Rokni, Hassan Ghasemzadeh","doi":"10.1145/3055031.3055087","DOIUrl":null,"url":null,"abstract":"Wearable technologies play a central role in human-centered Internet-of-Things applications. Wearables leverage machine learning algorithms to detect events of interest such as physical activities and medical complications. These algorithms, however, need to be retrained upon any changes in configuration of the system, such as addition/ removal of a sensor to/ from the network or displacement/ misplacement/ mis-orientation of the physical sensors on the body. We challenge this retraining model by stimulating the vision of autonomous learning with the goal of eliminating the labor-intensive, time-consuming, and highly expensive process of collecting labeled training data in dynamic environments. We propose an approach for autonomous retraining of the machine learning algorithms in real-time without need for any new labeled training data. We focus on a dynamic setting where new sensors are added to the system and worn on various body locations. We capture the inherent correlation between observations made by a static sensor view for which trained algorithms exist and the new dynamic sensor views for which an algorithm needs to be developed. By applying our real-time dynamic-view autonomous learning approach, we achieve an average accuracy of 81.1% in activity recognition using three experimental datasets. This amount of accuracy represents more than 13.8% improvement in the accuracy due to the automatic labeling of the sensor data in the newly added sensor. This performance is only 11.2% lower than the experimental upper bound where labeled training data are collected with the new sensor.","PeriodicalId":228318,"journal":{"name":"2017 16th ACM/IEEE International Conference on Information Processing in Sensor Networks (IPSN)","volume":"75 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2017-04-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"31","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2017 16th ACM/IEEE International Conference on Information Processing in Sensor Networks (IPSN)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1145/3055031.3055087","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 31
Abstract
Wearable technologies play a central role in human-centered Internet-of-Things applications. Wearables leverage machine learning algorithms to detect events of interest such as physical activities and medical complications. These algorithms, however, need to be retrained upon any change in the configuration of the system, such as the addition or removal of a sensor from the network, or the displacement, misplacement, or mis-orientation of the physical sensors on the body. We challenge this retraining model by pursuing the vision of autonomous learning, with the goal of eliminating the labor-intensive, time-consuming, and highly expensive process of collecting labeled training data in dynamic environments. We propose an approach for autonomous retraining of the machine learning algorithms in real time, without the need for any new labeled training data. We focus on a dynamic setting where new sensors are added to the system and worn on various body locations. We capture the inherent correlation between observations made by a static sensor view, for which trained algorithms exist, and the new dynamic sensor views, for which an algorithm needs to be developed. By applying our real-time dynamic-view autonomous learning approach, we achieve an average accuracy of 81.1% in activity recognition across three experimental datasets. This accuracy represents a more than 13.8% improvement attributable to the automatic labeling of data from the newly added sensor, and it is only 11.2% lower than the experimental upper bound in which labeled training data are collected with the new sensor.
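The sketch below illustrates the core idea described in the abstract: a classifier trained on the existing (static) sensor view labels time-synchronized observations from a newly added (dynamic) sensor, and those automatically generated labels are used to train a classifier for the new sensor. This is only a minimal illustration under assumed names and a scikit-learn setup; it is not the authors' implementation, and the feature dimensions and random data are placeholders.

```python
# Hypothetical sketch of dynamic-view autonomous labeling (assumptions, not the paper's code).
# A classifier trained on the static sensor view pseudo-labels synchronized
# windows from the new sensor; the new sensor's model is trained on those labels.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

# Labeled features from the static sensor view (training data already exists).
X_static_train = rng.normal(size=(500, 12))
y_static_train = rng.integers(0, 5, size=500)   # e.g., 5 activity classes

static_clf = RandomForestClassifier(n_estimators=100, random_state=0)
static_clf.fit(X_static_train, y_static_train)

# Time-synchronized, unlabeled observations: row i of both matrices comes from
# the same time window, captured simultaneously by the two sensors.
X_static_sync = rng.normal(size=(300, 12))      # static sensor features
X_new_sync = rng.normal(size=(300, 9))          # new (dynamic-view) sensor features

# Autonomous labeling: the static view's predictions become pseudo-labels
# for the new sensor's synchronized windows.
pseudo_labels = static_clf.predict(X_static_sync)

# Train the new sensor's classifier without any manually labeled data.
new_clf = RandomForestClassifier(n_estimators=100, random_state=0)
new_clf.fit(X_new_sync, pseudo_labels)
```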