Predicting "About-to-Eat" Moments for Just-in-Time Eating Intervention
Tauhidur Rahman, M. Czerwinski, Ran Gilad-Bachrach, Paul Johns
Proceedings of the 6th International Conference on Digital Health Conference, 2016-04-11
DOI: 10.1145/2896338.2896359
Citations: 59
Abstract
Various wearable sensors capturing body vibration, jaw movement, hand gesture, etc., have shown promise in detecting when one is currently eating. However, based on existing literature and user surveys conducted in this study, we argue that a Just-in-Time eating intervention, triggered upon detecting a current eating event, is sub-optimal. An eating intervention triggered at "About-to-Eat" moments could give users a better opportunity to adopt healthier eating behavior. In this work, we present a wearable sensing framework that predicts "About-to-Eat" moments and the "Time until the Next Eating Event". The framework consists of an array of sensors that capture physical activity, location, heart rate, electrodermal activity, skin temperature and caloric expenditure. Using signal processing and machine learning on this raw multimodal sensor stream, we train an "About-to-Eat" moment classifier that reaches an average recall of 77%. The "Time until the Next Eating Event" regression model attains a correlation coefficient of 0.49. Personalization further increases the performance of both models, to an average recall of 85% and a correlation coefficient of 0.65. The contributions of this paper include user surveys related to this problem, the design of a system that predicts "About-to-Eat" moments, and a regression model trained on multimodal sensor data in real time to support eating interventions for the user.
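For illustration, the sketch below shows one way such a pipeline could be wired together: windowed summary features over the listed sensor channels feeding an "About-to-Eat" classifier and a time-to-next-meal regressor. The paper does not publish its feature extraction or model choices, so the random-forest models, the synthetic data, and every name in the snippet are assumptions rather than the authors' implementation.

```python
# Hypothetical sketch of the kind of pipeline the abstract describes; the
# windowed summary features, the random-forest models, and all names below
# are assumptions, not the paper's published method.
import numpy as np
from scipy.stats import pearsonr
from sklearn.ensemble import RandomForestClassifier, RandomForestRegressor
from sklearn.model_selection import cross_val_predict, cross_val_score

rng = np.random.default_rng(0)

# Hypothetical feature matrix: one row per time window, columns are summary
# statistics (e.g. mean, variance) of physical activity, heart rate,
# electrodermal activity, skin temperature, and caloric expenditure.
n_windows, n_features = 500, 10
X = rng.normal(size=(n_windows, n_features))

# Synthetic labels standing in for annotated sensor data.
y_about_to_eat = rng.integers(0, 2, size=n_windows)      # 1 = "About-to-Eat" window
y_minutes_to_meal = rng.uniform(0, 180, size=n_windows)  # minutes until next eating event

# "About-to-Eat" moment classifier, evaluated by recall as in the paper.
clf = RandomForestClassifier(n_estimators=100, random_state=0)
recall = cross_val_score(clf, X, y_about_to_eat, cv=5, scoring="recall").mean()

# "Time until the Next Eating Event" regressor, evaluated by the Pearson
# correlation between cross-validated predictions and the targets.
reg = RandomForestRegressor(n_estimators=100, random_state=0)
pred = cross_val_predict(reg, X, y_minutes_to_meal, cv=5)
r, _ = pearsonr(pred, y_minutes_to_meal)

print(f"about-to-eat recall: {recall:.2f}, time-to-meal correlation: {r:.2f}")
```

Personalization, as reported in the abstract, would amount to fitting or fine-tuning these models on a single user's history rather than on pooled data; the snippet above only shows the population-level setup.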