Predicting "About-to-Eat" Moments for Just-in-Time Eating Intervention

Tauhidur Rahman, M. Czerwinski, Ran Gilad-Bachrach, Paul Johns
DOI: 10.1145/2896338.2896359
Published in: Proceedings of the 6th International Conference on Digital Health Conference, 2016-04-11
Citations: 59

Abstract

Various wearable sensors capturing body vibration, jaw movement, hand gesture, etc., have shown promise in detecting when one is currently eating. However, based on existing literature and user surveys conducted in this study, we argue that a Just-in-Time eating intervention, triggered upon detecting a current eating event, is sub-optimal. An eating intervention triggered at "About-to-Eat" moments could provide users with a further opportunity to adopt a better and healthier eating behavior. In this work, we present a wearable sensing framework that predicts "About-to-Eat" moments and the "Time until the Next Eating Event". The wearable sensing framework consists of an array of sensors that capture physical activity, location, heart rate, electrodermal activity, skin temperature and caloric expenditure. Using signal processing and machine learning on this raw multimodal sensor stream, we train an "About-to-Eat" moment classifier that reaches an average recall of 77%. The "Time until the Next Eating Event" regression model attains a correlation coefficient of 0.49. Personalization further increases the performance of both models, to an average recall of 85% and a correlation coefficient of 0.65. The contributions of this paper include user surveys related to this problem, the design of a system that predicts "About-to-Eat" moments, and a regression model trained on multimodal sensor data in real time to support potential eating interventions for the user.
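As a rough illustration of the two evaluation metrics the abstract reports (recall for the "About-to-Eat" classifier, Pearson correlation for the time-until-next-meal regressor), here is a minimal sketch of how each is computed. The labels and predictions below are hypothetical placeholders, not data from the paper:

```python
from math import sqrt

def recall(y_true, y_pred, positive=1):
    """Fraction of actual positive windows ("About-to-Eat" moments) that were predicted as positive."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == positive and p == positive)
    actual = sum(1 for t in y_true if t == positive)
    return tp / actual if actual else 0.0

def pearson_r(x, y):
    """Pearson correlation between predicted and true time-until-next-eating-event values."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sqrt(sum((a - mx) ** 2 for a in x))
    sy = sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Hypothetical window-level labels: 1 = "About-to-Eat", 0 = not.
y_true = [1, 0, 1, 1, 0, 0, 1, 0]
y_pred = [1, 0, 0, 1, 0, 1, 1, 0]
print(recall(y_true, y_pred))  # 0.75

# Hypothetical minutes-until-next-meal: ground truth vs. model output.
t_true = [10, 45, 90, 20, 60]
t_pred = [15, 40, 70, 30, 55]
print(round(pearson_r(t_true, t_pred), 2))
```

In the paper's setting, `y_pred` and `t_pred` would come from models fed with the multimodal sensor features (activity, location, heart rate, electrodermal activity, skin temperature, caloric expenditure); the paper's reported figures (77%/85% recall, 0.49/0.65 correlation) are averages over its own dataset.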