Adjunct Proceedings of the 2020 ACM International Joint Conference on Pervasive and Ubiquitous Computing and Proceedings of the 2020 ACM International Symposium on Wearable Computers: Latest Articles
A multi-person respiration monitoring system using COTS WiFi devices
Youwei Zeng, Zhaopeng Liu, Dan Wu, Jinyi Liu, Jie Zhang, Daqing Zhang
DOI: 10.1145/3410530.3414325

In recent years, we have seen efforts to simultaneously monitor the respiration of multiple persons based on the channel state information (CSI) retrieved from commodity WiFi devices. However, existing approaches only work when the monitored persons exhibit dramatically different respiration rates, and their performance degrades significantly when the subjects have similar rates. Moreover, they can only obtain the average respiration rate over a period of time and fail to capture how the rate changes over time. These two constraints greatly limit the application of such approaches in real life. Unlike existing approaches, which apply spectral analysis to the CSI amplitude (or phase difference) to obtain respiration rate information, we leverage the multiple antennas provided by commodity WiFi hardware and model multi-person respiration sensing as a blind source separation (BSS) problem. We then solve it using independent component analysis (ICA) to obtain the respiration information of each person. In this demo, we present MultiSense, a multi-person respiration monitoring system using COTS WiFi devices.
Improving activity data collection with on-device personalization using fine-tuning
Nattaya Mairittha, Tittaya Mairittha, Sozo Inoue
DOI: 10.1145/3410530.3414370

One of the biggest challenges of activity data collection is the unavoidable reliance on users and the need to keep them engaged so that they provide labels consistently. Recent breakthroughs in mobile platforms have proven effective in bringing intelligence powered by deep neural networks to mobile devices. In this study, we propose on-device personalization by fine-tuning convolutional neural networks as a mechanism for optimizing human effort in data labeling. First, we transfer the knowledge gained by on-cloud pre-training on crowdsourced data to mobile devices. Second, we incrementally fine-tune a personalized model on each individual device using its locally accumulated input. Then, we use the activity estimates produced by the on-device model as feedback to motivate participants to improve their data labeling. We conducted a verification study and gathered activity labels with smartphone sensors. Our preliminary evaluation results indicate that the proposed method outperforms the baseline by approximately 8% in recognition accuracy.
Feature based random forest nurse care activity recognition using accelerometer data
Carolin Lübbe, Björn Friedrich, Sebastian J. F. Fudickar, S. Hellmers, A. Hein
DOI: 10.1145/3410530.3414340

The 2nd Nurse Care Activity Recognition Challenge Using Lab and Field Data addresses the important issue of care and the need for assistance systems in the nursing profession, such as automatic documentation systems. Data for 12 different care activities were recorded with an accelerometer attached to the right arm of the nurses. Both laboratory and field data were taken into account. The task was to classify each activity based on the accelerometer data. We participated in the challenge as team Gudetama. We trained a Random Forest classifier and achieved an accuracy of 61.11% on our internal test set.
Deep learning for cognitive load monitoring: a comparative evaluation
Andrea Salfinger
DOI: 10.1145/3410530.3414433

The Cognitive Load Monitoring Challenge organized at the UbiTtention 2020 workshop tasked the research community with inferring a user's cognitive load from physiological measurements recorded by a low-cost wearable. This is challenging due to the subjective nature of these physiological characteristics: in contrast to related problems involving objective measurements of physical phenomena (e.g., activity recognition from smartphone sensors), subjects' physiological response patterns under cognitive load may be highly individual, i.e., exhibit significant inter-subject variance. However, models trained on datasets compiled in laboratory settings should also deliver accurate classifications when applied to measurements from novel subjects. In this work, we study the applicability of established deep learning models for time series classification to this challenging problem. We examine different kinds of data normalization and investigate a variant of data augmentation.
A physical knowledge-based extreme learning machine approach to fault diagnosis of rolling element bearing from small datasets
Tianyun Liu, Li Kou, Le Yang, Wenhui Fan, Cheng Wu
DOI: 10.1145/3410530.3414592

Learning-based methods have been widely applied to fault diagnosis models for rolling element bearings. However, mainstream methods require large training datasets, a requirement that is often violated in practical applications. In this paper, we propose a physical knowledge-based hierarchical extreme learning machine (H-ELM) approach to fault diagnosis for bearings with small and imbalanced datasets. First, the proposed method uses a simple feature extraction algorithm to build a knowledge base for sample selection from the historical database, and the given training dataset is augmented with this knowledge base. Second, a modified H-ELM algorithm is developed to identify the fault location and recognize the fault severity ranking based on the augmented dataset. Third, we design a self-optimizing module to optimize the sample selection and improve the performance of the H-ELM network. To evaluate the effectiveness of the proposed approach, we compare it in numerical experiments against the H-ELM without the knowledge base and against data augmentation-based support vector machines (SVM), back-propagation neural networks (BPNN), and deep belief networks (DBN). The experimental results demonstrate that our approach outperforms these counterparts in accuracy when dealing with small and imbalanced datasets.
Complex nurse care activity recognition using statistical features
Promit Basak, Shahamat Mustavi Tasin, Malisha Islam Tapotee, Md. Mamun Sheikh, A. Sakib, Sriman Bidhan Baray, M. Ahad
DOI: 10.1145/3410530.3414338

Human activity recognition has important applications in healthcare, human-computer interaction and other arenas. The direct interaction between nurse and patient plays a pivotal role in healthcare, and recognizing the various activities of nurses can improve healthcare in many ways. However, it is a very daunting task due to the complexity of the activities. "The 2nd Nurse Care Activity Recognition Challenge Using Lab and Field Data" provides accelerometer data to predict 12 activities conducted by nurses in both lab and real-life settings. The main difficulty of this dataset is processing the raw data because of the high imbalance among the classes. Besides, not all activities were performed by all subjects. Our team, 'Team Apophis', processed the data by filtering noise and applying a windowing technique, extracting time- and frequency-domain features from the lab and field data separately. After merging the lab and field data, 10-fold cross-validation was applied to select the best-performing model. We obtained a promising accuracy of 65% with an F1 score of 40% on this challenging dataset using a Random Forest classifier.
Appliance fingerprinting using sound from power supply
Lanqing Yang, Honglu Li, Zhaoxi Chen, Xiaoyu Ji, Yi-Chao Chen, Guangtao Xue, Chuang-Wen You
DOI: 10.1145/3410530.3414385

Recognizing which appliances are operating is of great importance for smart environments, enabling services such as energy conservation, user activity recognition, and fire hazard prevention. Many methods have been proposed to recognize appliances by analyzing their power voltage, current, electromagnetic emissions, vibration, light, or sound. Among these, measuring power voltage and current requires installing intrusive sensors on each appliance. Measuring electromagnetic emissions and vibration requires sensors to be attached or close (e.g., < 15 cm) to the appliances. Methods relying on light are not universally applicable, since only some appliances emit light. Similarly, methods using sound rely on the noise from motor vibration or mechanical collisions and are therefore not applicable to many appliances. As a result, existing methods for appliance fingerprinting are intrusive, have high deployment costs, or work for only a subset of appliances. In this work, we propose using the inaudible high-frequency sound generated by the switching-mode power supply (SMPS) of an appliance as a fingerprint to recognize it. Since SMPS is widely adopted in home appliances, the proposed method works for most appliances. Our preliminary experiments on 18 household appliances (10 of which are of the same model) showed a recognition accuracy of 97.6%.
Facial expression based satisfaction index for empathic buildings
Fahad Sohrab, Jenni Raitoharju, M. Gabbouj
DOI: 10.1145/3410530.3414443

In this work, we examine the suitability of automatic facial expression recognition for satisfaction analysis in an Empathic Building environment. We use machine learning-based facial expression recognition at workstations to integrate an online satisfaction index into the Empathic Building platform. To analyze the suitability of facial expression recognition for reflecting longer-term satisfaction, we examine the changes and trends in the happiness curves of our test users. We also correlate the happiness curves with the temperature, humidity, and light intensity of the test users' city (Tampere, Finland). The results indicate that the proposed analysis indeed shows trends that may be useful for long-term satisfaction analysis in different kinds of intelligent buildings.
A multi-view architecture for the SHL challenge
Massinissa Hamidi, A. Osmani, Pegah Alizadeh
DOI: 10.1145/3410530.3414351

To recognize locomotion and transportation modes in a user-independent manner with an unknown target phone position, we (team Eagles) propose an approach based on two main steps: reducing the impact of the systematic effects that stem from each phone position, followed by recognizing the appropriate activity. The general architecture is composed of three groups of neural networks organized in the following order: the first group recognizes the source (phone position), the second group normalizes the data to neutralize the impact of the source on the activity learning process, and the last group recognizes the activity itself. We performed extensive experiments, and the preliminary results encourage us to pursue this direction of learning the source separately in order to reduce position-specific biases before recognizing the activity.
Respiratory events screening using consumer smartwatches
Illia Fedorin, Kostyantyn Slyusarenko, Margaryta Nastenko
DOI: 10.1145/3410530.3414399

Respiratory-related events (RE) during nocturnal sleep disturb the natural physiological pattern of sleep. These events include all types of apnea and hypopnea, respiratory-event-related arousals, and snoring. Breath analysis has taken on particular importance during the COVID-19 pandemic. The proposed algorithm is a deep learning model with long short-term memory cells that detects RE in each 1-minute epoch of nocturnal sleep. Our approach provides the basis for smartwatch-based analysis of respiratory-related sleep patterns (the accuracy of epoch-by-epoch classification is greater than 80%) and can be applied to screening for the potential risk of respiratory-related diseases: the mean absolute error of AHI estimation is about 6.5 events/h on a test set that includes participants with all levels of apnea severity, and the two-class screening accuracy (at an AHI threshold of 15 events/h) is greater than 90%.