{"title":"Real time monitoring and recognition of eating and physical activity with a wearable device connected to the eyeglass","authors":"Muhammad Farooq, E. Sazonov","doi":"10.1109/ICSENST.2017.8304420","DOIUrl":null,"url":null,"abstract":"Motion artifacts and speech have been found to degrade the accuracy of wearable device used for detection and recognition of food intake. Thus, there is a need to investigate and develop systems which are impervious to these artifacts. For these systems to be practical in daily living, it is necessary to evaluate their ability to monitor food intake in real-time. This study presents results of real-time testing of a wearable device for real-time classification of multiclass activities. The device consists of a sensor for chewing detection (piezoelectric film sensor) and an accelerometer for physical activity monitoring. The device is in the form of eyeglasses. The strain sensor is attached to the temporalis muscle for chewing detection. Ten participants tested the system while performing activities including eating at rest, talking, walking and eating while walking. For 5-second epochs, ten features were extracted from both sensor signals. A communication protocol was implemented where sensor data were uploaded to a remote server for real-time data processing. Data processing was performed in two steps. In the first step, a multiclass decision tree model was trained offline with data from seven participants to differentiate among eating/chewing and non-eating and two levels of physical activity (sedentary and physically active). In the second step, the trained model was used on remaining three participants to predict the activity label in real-time. Offline classification and real-time online classification achieved average F1-scores of 93.15% and 94.65% respectively. These results indicate that the device can accurately differentiate between epochs of eating and non-eating as well as epochs of two different physical activity levels; in real-time.","PeriodicalId":289209,"journal":{"name":"2017 Eleventh International Conference on Sensing Technology (ICST)","volume":"41 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2017-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"14","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2017 Eleventh International Conference on Sensing Technology (ICST)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/ICSENST.2017.8304420","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 14
Abstract
Motion artifacts and speech have been found to degrade the accuracy of wearable devices used for the detection and recognition of food intake. Thus, there is a need to investigate and develop systems that are impervious to these artifacts. For such systems to be practical in daily living, it is necessary to evaluate their ability to monitor food intake in real time. This study presents results of real-time testing of a wearable device for multiclass activity classification. The device, in the form of eyeglasses, consists of a piezoelectric film sensor for chewing detection and an accelerometer for physical activity monitoring. The strain sensor is attached over the temporalis muscle for chewing detection. Ten participants tested the system while performing activities including eating at rest, talking, walking, and eating while walking. For each 5-second epoch, ten features were extracted from the two sensor signals. A communication protocol was implemented in which sensor data were uploaded to a remote server for real-time processing. Data processing was performed in two steps. In the first step, a multiclass decision tree model was trained offline with data from seven participants to differentiate between eating/chewing and non-eating epochs at two levels of physical activity (sedentary and physically active). In the second step, the trained model was used on the remaining three participants to predict the activity label in real time. Offline classification and real-time online classification achieved average F1-scores of 93.15% and 94.65%, respectively. These results indicate that the device can accurately differentiate between epochs of eating and non-eating, as well as epochs of two different physical activity levels, in real time.
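To make the two-step pipeline concrete, below is a minimal Python sketch of per-epoch feature extraction followed by offline decision-tree training and per-epoch prediction, as a remote server might run it on uploaded data. The abstract states that ten features are computed per 5-second epoch from the piezoelectric and accelerometer signals but does not enumerate them, so the specific features, the four-class label encoding, and the function names here are illustrative assumptions, not the authors' implementation.

```python
# Sketch of the two-step pipeline: offline training of a multiclass
# decision tree on epoch features, then per-epoch real-time prediction.
# Feature definitions and label encoding are assumptions for illustration.
import numpy as np
from sklearn.tree import DecisionTreeClassifier

def epoch_features(piezo, accel):
    """Ten illustrative features from one 5-second epoch.

    piezo: 1-D array of piezoelectric strain-sensor samples.
    accel: (n, 3) array of accelerometer samples.
    """
    mag = np.linalg.norm(accel, axis=1)  # accelerometer magnitude per sample
    # Zero crossings of the strain signal, a rough proxy for chewing rate.
    zero_crossings = np.sum(np.abs(np.diff(np.signbit(piezo).astype(np.int8))))
    return np.array([
        piezo.mean(), piezo.std(), np.abs(piezo).max(),
        zero_crossings,
        np.sum(piezo.astype(float) ** 2),   # strain-signal energy
        mag.mean(), mag.std(), mag.max(),
        np.sum(mag ** 2),                   # movement energy
        np.percentile(mag, 90),
    ])

# Step 1: offline training on epochs from the training participants.
# Assumed label encoding: 0 = sedentary/non-eating, 1 = sedentary/eating,
# 2 = active/non-eating, 3 = active/eating.
def train_offline(train_epochs, train_labels):
    X = np.vstack([epoch_features(p, a) for p, a in train_epochs])
    clf = DecisionTreeClassifier(random_state=0)
    clf.fit(X, train_labels)
    return clf

# Step 2: classify each uploaded 5-second epoch as it arrives.
def classify_epoch(clf, piezo, accel):
    features = epoch_features(piezo, accel).reshape(1, -1)
    return int(clf.predict(features)[0])
```

A decision tree fits the setting described in the abstract: it is cheap to evaluate per epoch, which matters when each 5-second window must be labeled before the next one arrives.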