Pub Date: 2017-03-13 | DOI: 10.1109/PERCOMW.2017.7917586
Yaqian Xu, I. Hübener, Ann-Kathrin Seipp, Sandra Ohly, K. David
The recognition of human emotions from physiological signals such as Electrodermal Activity (EDA), Electrocardiogram (ECG), or Electromyography (EMG) has attracted considerable research interest over the last few decades. Although Emotion Recognition (ER) systems using physiological signals perform relatively well under lab conditions, they are not widely used in real-world scenarios. One important reason is that, in the real world, physiological signals may be influenced by human movement and therefore cannot serve as the sole indicator of emotions. In this paper, we investigate the influence of human movement on ER using physiological signals. We compare different measures of emotion before and after a test person has performed some physical activity (e.g. walking, going upstairs). We discuss the main differences between recognizing emotions in the lab and in the real world and provide new insights into the development of ER systems for real-world scenarios.
Title: "From the lab to the real-world: An investigation on the influence of human movement on Emotion Recognition using physiological signals"
Published in: 2017 IEEE International Conference on Pervasive Computing and Communications Workshops (PerCom Workshops)
Pub Date: 2017-03-13 | DOI: 10.1109/PERCOMW.2017.7917531
Bo Zhou, G. Bahle, Lorenzo Furg, Monit Shah Singh, H. Cruz, P. Lukowicz
In this demonstrator, we present Trainwear, a wearable garment that uses fabric pressure sensing for sports exercise activity recognition and feedback. The shirt's design emphasizes suitability for the general public, building on our previously developed sensing technology. A video of the demo is linked at the end of this technical paper.
Title: "Trainwear: A real-time assisted training feedback system with fabric wearable sensors"
Published in: 2017 IEEE International Conference on Pervasive Computing and Communications Workshops (PerCom Workshops)
Pub Date: 2017-03-13 | DOI: 10.1109/PERCOMW.2017.7917545
Frank Krüger, Christina Heine, Sebastian Bader, Albert Hein, S. Teipel, T. Kirste
The annotation of human activity is a crucial prerequisite for applying supervised machine learning methods. It is typically obtained either by live annotation by the participant or by subsequent video log analysis. Both methods, however, have disadvantages when applied in nursing homes for people with dementia. On the one hand, people suffering from dementia are unable to produce such annotations; on the other hand, video observation requires considerable technical effort. The quality-of-care research domain addresses these issues with observation tools that allow the simultaneous live observation of up to eight participants, such as dementia care mapping (DCM). We developed an annotation scheme based on the popular clinical observation tool DCM to obtain annotations of challenging behaviours. In this paper, we report our experiences with this approach and discuss the applicability of clinical observation tools in the domain of automatic human activity assessment.
Title: "On the applicability of clinical observation tools for human activity annotation"
Published in: 2017 IEEE International Conference on Pervasive Computing and Communications Workshops (PerCom Workshops)
Pub Date: 2017-03-13 | DOI: 10.1109/PERCOMW.2017.7917569
S. Aminikhanghahi, D. Cook
Real-time detection of transitions between activities based on sensor data is a valuable but somewhat untapped challenge. Detecting these transitions is useful for activity segmentation, for timing notifications or interventions, and for analyzing human behavior. In this work, we design and evaluate real-time machine learning-based methods for automatic segmentation and recognition of continuous human daily activity. We detect activity transitions and integrate the change point detection algorithm with smart home activity recognition to segment human daily activities into separate actions and correctly identify each action. Experiments on real-world smart home datasets suggest that transition-aware activity recognition algorithms perform best for detecting activity boundaries and for streaming activity segmentation.
Title: "Using change point detection to automate daily activity segmentation"
Published in: 2017 IEEE International Conference on Pervasive Computing and Communications Workshops (PerCom Workshops)
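The abstract does not name the specific change point detection algorithm used. As a rough illustration of the underlying idea only, a minimal sliding-window detector that flags points where the sensor distribution shifts could be sketched as follows (window size, threshold, and the synthetic stream are all invented here):

```python
import numpy as np

def detect_change_points(signal, window=5, threshold=1.5):
    """Flag indices where the mean of the trailing window differs
    sharply from the leading window; a crude stand-in for the change
    point detection the paper couples with activity recognition."""
    change_points = []
    for t in range(window, len(signal) - window):
        before = signal[t - window:t]
        after = signal[t:t + window]
        # Pooled std guards against division by zero on flat segments.
        pooled = np.std(np.concatenate([before, after])) or 1.0
        score = abs(np.mean(after) - np.mean(before)) / pooled
        if score > threshold:
            change_points.append(t)
    return change_points

# Synthetic sensor stream: a quiet activity, then a burst (new activity).
stream = np.concatenate([np.zeros(20), np.ones(20) * 5.0])
print(detect_change_points(stream))
```

A real detector would work on multivariate smart home sensor features and score distributional change rather than a simple mean shift, but the segmentation-by-transition idea is the same.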
Pub Date: 2017-03-13 | DOI: 10.1109/PERCOMW.2017.7917587
Cedric Konan, H. Suwa, Yutaka Arakawa, K. Yasumoto
This paper presents a study on estimating the emotions conveyed in clips of background music (BGM) to be used in an automatic slideshow creation system. The system we aim to develop automatically tags each given piece of background music with the main emotion it conveys, in order to recommend the most suitable music clip to slideshow creators based on the main emotions of the embedded photos. As a first step, we developed a machine learning model to estimate the emotions conveyed in a music clip and achieved 88% classification accuracy under cross-validation. The second part of our work involved developing a web application using the Microsoft Emotion API to determine the emotions in photos, so the system can find the best candidate music for each photo in the slideshow. Sixteen users rated the recommended background music for a set of photos on a 5-point Likert scale, and we achieved average ratings of 4.1, 3.6, and 3.0 for photo sets 1, 2, and 3 of our evaluation task, respectively.
Title: "EmoBGM: Estimating sound's emotion for creating slideshows with suitable BGM"
Published in: 2017 IEEE International Conference on Pervasive Computing and Communications Workshops (PerCom Workshops)
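The abstract leaves the feature set and classifier unspecified. Purely to illustrate the tagging step, here is a toy nearest-centroid classifier over hypothetical audio features (the feature names, values, and labels are invented; the paper's actual model reached 88% accuracy with cross-validation):

```python
import numpy as np

# Hypothetical per-clip feature vectors: [tempo, energy, mode].
TRAIN = {
    "happy": np.array([[130, 0.8, 1.0], [140, 0.9, 1.0]]),
    "sad":   np.array([[ 70, 0.3, 0.0], [ 60, 0.2, 0.0]]),
}

# One centroid per emotion, averaged over its training clips.
CENTROIDS = {label: feats.mean(axis=0) for label, feats in TRAIN.items()}

def tag_emotion(clip_features):
    """Tag a clip with the emotion whose centroid is nearest."""
    return min(CENTROIDS,
               key=lambda lbl: np.linalg.norm(clip_features - CENTROIDS[lbl]))

print(tag_emotion(np.array([128, 0.7, 1.0])))  # a fast, bright clip -> happy
```

The recommendation step then reduces to matching each photo's dominant emotion (from the Microsoft Emotion API) against these clip tags.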
Pub Date: 2017-03-13 | DOI: 10.1109/PERCOMW.2017.7917634
A. Dorri, S. Kanhere, R. Jurdak, Praveen Gauravaram
Internet of Things (IoT) security and privacy remain a major challenge, mainly due to the massive scale and distributed nature of IoT networks. Blockchain (BC)-based approaches provide decentralized security and privacy, yet they involve significant energy, delay, and computational overhead that is not suitable for most resource-constrained IoT devices. In our previous work, we presented a lightweight instantiation of a BC particularly geared for use in IoT by eliminating Proof of Work (POW) and the concept of coins. Our approach was exemplified in a smart home setting and consists of three main tiers: cloud storage, overlay, and smart home. In this paper, we delve deeper and outline the various core components and functions of the smart home tier. Each smart home is equipped with an always-online, high-resource device, known as a “miner”, that is responsible for handling all communication within and external to the home. The miner also maintains a private and secure BC used for controlling and auditing communications. We show that our proposed BC-based smart home framework is secure by thoroughly analysing its security with respect to the fundamental security goals of confidentiality, integrity, and availability. Finally, we present simulation results to highlight that the overheads (in terms of traffic, processing time, and energy consumption) introduced by our approach are insignificant relative to its security and privacy gains.
Title: "Blockchain for IoT security and privacy: The case study of a smart home"
Published in: 2017 IEEE International Conference on Pervasive Computing and Communications Workshops (PerCom Workshops)
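To make the "blockchain without POW" idea concrete, here is a heavily simplified sketch of a miner-kept private chain: transactions are checked against a local policy and hash-chained for auditing, with no mining puzzle and no coins. The class name, policy structure, and device names are invented for illustration and are not the paper's design:

```python
import hashlib
import json

class LocalChain:
    """Toy private, miner-kept blockchain: no Proof of Work, no coins.
    The home miner policy-checks each device transaction, then appends
    it to a hash chain that supports later auditing."""

    def __init__(self):
        self.blocks = []
        # Hypothetical policy header: allowed actions per device.
        self.policy = {"thermostat": {"read", "write"}}

    def add_transaction(self, device, action, payload):
        if action not in self.policy.get(device, set()):
            return False  # rejected by the miner's policy
        prev_hash = self.blocks[-1]["hash"] if self.blocks else "0" * 64
        body = {"device": device, "action": action,
                "payload": payload, "prev": prev_hash}
        # Hash the canonical JSON form so the chain is tamper-evident.
        body["hash"] = hashlib.sha256(
            json.dumps(body, sort_keys=True).encode()).hexdigest()
        self.blocks.append(body)
        return True
```

Because every block commits to its predecessor's hash, rewriting an audited transaction would invalidate all later blocks, which is what gives even this POW-free chain its integrity property.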
Pub Date: 2017-03-13 | DOI: 10.1109/PERCOMW.2017.7917574
Tingting Zhu, Hai Wang, Haikun Wei
Data centers have been widely used in many pervasive computing applications. This paper shows that the cost-aware virtual machine allocation problem for off-grid green data centers is an integer programming problem and is NP-hard. The paper presents a cost-aware virtual machine allocation algorithm that attempts to utilize renewable energy sources and minimize the energy cost of fossil fuel while maintaining the quality-of-service requirements of the tasks. Simulation results show that the proposed algorithm is sensitive to changes in the price of fossil fuel and achieves scalable performance.
Title: "Cost-aware virtual machine allocation for off-grid green data centers"
Published in: 2017 IEEE International Conference on Pervasive Computing and Communications Workshops (PerCom Workshops)
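Since the exact problem is an NP-hard integer program, practical allocators typically use heuristics. The greedy stand-in below (not the paper's algorithm; server names, budgets, and the fossil price are invented) illustrates the cost trade-off: place VMs on renewable budget first and pay for fossil fuel only when the green budget runs out:

```python
def allocate(vms, servers, fossil_price):
    """Greedy cost-aware placement sketch.
    vms: list of (name, power_demand); servers: dict name -> remaining
    renewable power budget (mutated in place). Returns (placement, fossil_cost)."""
    placement, fossil_cost = {}, 0.0
    for vm, demand in sorted(vms, key=lambda v: -v[1]):  # big VMs first
        # Prefer a server whose renewable budget still covers the VM.
        green = [s for s, budget in servers.items() if budget >= demand]
        if green:
            target = max(green, key=lambda s: servers[s])
            servers[target] -= demand
        else:
            # No green capacity left: place anywhere, pay fossil energy.
            target = min(servers, key=servers.get)
            fossil_cost += demand * fossil_price
        placement[vm] = target
    return placement, fossil_cost
```

An exact solver would instead encode the same budget constraints and cost objective as an integer program, which is where the NP-hardness result in the paper applies.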
Pub Date: 2017-03-13 | DOI: 10.1109/PERCOMW.2017.7917572
Agnes Grünerbl, G. Bahle, P. Lukowicz
This paper investigates the problem of recognizing activities and dynamic ad-hoc collaboration involving multiple users. We consider people performing various predominantly physical, compound activities in a smart environment (which includes personal/wearable devices). Here, "compound" means that the activity can be decomposed into primitive (atomic) actions executed by individual users. We investigate how noisy recognition of the atomic actions of individual users can be used to identify instances of cooperation at the level of the compound activities. To this end, we first introduce a hierarchical tree plan library model for activity representation. Using this new model, we developed an algorithm that detects ad-hoc team interaction without any further knowledge about roles or pre-designed tasks. We evaluate the model and algorithm "post mortem" on data extracted from video footage of a real nurse emergency training session, increasing the difficulty by artificially adding recognition errors.
Title: "Detecting spontaneous collaboration in dynamic group activities from noisy individual activity data"
Published in: 2017 IEEE International Conference on Pervasive Computing and Communications Workshops (PerCom Workshops)
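The plan-library idea can be miniaturized as follows: a compound activity is a node whose children are atomic actions, and cooperation is inferred when those actions are jointly covered by several users. The activity and action names below are invented for illustration, and the real model is a hierarchical tree robust to recognition noise rather than this flat lookup:

```python
# Hypothetical plan library: compound activity -> required atomic actions.
PLAN_LIBRARY = {
    "resuscitate_patient": {"chest_compression", "ventilate", "prepare_defib"},
}

def detect_cooperation(observed):
    """observed: dict user -> set of recognized atomic actions.
    Report compound activities whose atomic actions are covered jointly
    by two or more users; no roles or task assignments are assumed."""
    teams = {}
    all_actions = set().union(*observed.values()) if observed else set()
    for activity, required in PLAN_LIBRARY.items():
        contributors = {u for u, acts in observed.items() if acts & required}
        if len(contributors) >= 2 and required <= all_actions:
            teams[activity] = contributors
    return teams
```

Even this toy version shows why noisy atomic-action recognition matters: a single missed action can break the coverage test, which is exactly the degradation the paper probes by injecting artificial recognition errors.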
Pub Date: 2017-03-13 | DOI: 10.1109/PERCOMW.2017.7917570
Jamie A. Ward, Gerald Pirkl, Peter Hevesi, P. Lukowicz
This paper presents a method for using wearable accelerometers and microphones to detect instances of ad-hoc physical collaboration between members of a group. Four people are instructed to construct a large video wall and must cooperate to complete the task. The task is loosely structured, with minimal outside assistance, to better reflect the ad-hoc nature of many real-world construction scenarios. Audio data recorded from chest-worn microphones is used to reveal information on collocation, i.e. whether or not participants are near one another. Movement data recorded using 3-axis accelerometers worn on each person's head and wrists provides information on correlated movements, such as when participants help one another to lift a heavy object. Collocation and correlated-movement information are then combined to determine who is working together at any given time. The work shows how data from commonly available sensors can be combined across multiple people, using a simple, low-power algorithm, to detect a range of physical collaborations.
Title: "Detecting physical collaborations in a group task using body-worn microphones and accelerometers"
Published in: 2017 IEEE International Conference on Pervasive Computing and Communications Workshops (PerCom Workshops)
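A minimal sketch of this kind of fusion rule, under assumptions not taken from the paper (the thresholds, the use of Pearson correlation for movement, and mean loudness difference as the collocation proxy are all invented), might look like:

```python
import numpy as np

def collaborating(acc_a, acc_b, audio_a, audio_b,
                  corr_thresh=0.7, audio_thresh=0.5):
    """Toy fusion rule: flag two people as working together when their
    acceleration magnitudes correlate (shared physical effort) AND their
    microphone energy profiles look alike (a proxy for collocation)."""
    move_corr = np.corrcoef(acc_a, acc_b)[0, 1]
    # Collocated people record similar ambient loudness over time.
    audio_gap = np.mean(np.abs(np.asarray(audio_a) - np.asarray(audio_b)))
    return move_corr > corr_thresh and audio_gap < audio_thresh
```

Gating movement correlation on collocation is what keeps the rule cheap: two people moving similarly in different rooms (e.g. both walking) are not counted as collaborating.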
Pub Date: 2017-03-13 | DOI: 10.1109/PERCOMW.2017.7917596
Dzung T. Nguyen, Eli Cohen, M. Pourhomayoun, N. Alshurafa
Passively detecting and counting swallows during food intake enables accurate detection of eating episodes in free-living participants and aids in characterizing those episodes. On average, the more food consumed, the greater the number of swallows, and swallow counts have been shown to correlate positively with caloric intake. While passive sensing measures have shown promise in recent years, they are not yet reliable enough for detecting eating, impeding the development of timely interventions that change poor eating behavior. This paper presents a novel integrated wearable necklace that comprises two piezoelectric sensors positioned vertically around the neck, an inertial motion unit, and long short-term memory (LSTM) neural networks to detect and count swallows. A unique correlation of derivative features creates candidate swallows. To reduce the false positive rate (FPR), features are extracted using symmetric and asymmetric windows surrounding each candidate swallow and fed into a Random Forest classifier. Independently, an LSTM network is trained on raw data using automated feature learning. In an in-lab study comprising confounding activities of 10 participants, results show a swallow-count RMSE of 3.34 using the LSTM and a 76.07% average F-measure for swallows, outperforming the Random Forest classifier. This system thus shows promise in accurately detecting and characterizing eating patterns, enabling passive detection of swallow counts and paving the way for timely interventions to prevent problematic eating.
Title: "SwallowNet: Recurrent neural network detects and characterizes eating patterns"
Published in: 2017 IEEE International Conference on Pervasive Computing and Communications Workshops (PerCom Workshops)
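The candidate-generation step ("a correlation of derivative features creates candidate swallows") can be sketched as follows. The window size, threshold, and synthetic piezo traces are invented; the paper's actual pipeline then filters these candidates with a Random Forest or replaces the hand-crafted stage with an LSTM on raw data:

```python
import numpy as np

def candidate_swallows(piezo_top, piezo_bottom, window=4, corr_thresh=0.8):
    """Mark candidate swallow onsets where the derivatives of the two
    vertically stacked piezo channels move together inside a short window."""
    d_top = np.diff(piezo_top)
    d_bot = np.diff(piezo_bottom)
    candidates = []
    for t in range(0, len(d_top) - window):
        a, b = d_top[t:t + window], d_bot[t:t + window]
        if np.std(a) == 0 or np.std(b) == 0:
            continue  # flat segments carry no swallow signature
        if np.corrcoef(a, b)[0, 1] > corr_thresh:
            candidates.append(t)
    return candidates
```

Requiring agreement between the two channels is what suppresses single-sensor artifacts (e.g. the necklace shifting against the skin), at the cost of a follow-up classifier to prune the remaining false positives.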