A Survey of Challenges and Opportunities in Sensing and Analytics for Risk Factors of Cardiovascular Disorders
Nathan C Hurley, Erica S Spatz, Harlan M Krumholz, Roozbeh Jafari, Bobak J Mortazavi
ACM Transactions on Computing for Healthcare, vol. 2, no. 1. Published 2021-01-01 (epub 2020-12-30). DOI: 10.1145/3417958. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8320445/pdf/nihms-1670305.pdf
Cardiovascular disorders cause nearly one in three deaths in the United States. Short- and long-term care for these disorders is often determined in short-term settings. However, these decisions are made with minimal longitudinal and long-term data. To overcome this bias towards data from acute care settings, improved longitudinal monitoring for cardiovascular patients is needed. Longitudinal monitoring provides a more comprehensive picture of patient health, allowing for informed decision making. This work surveys sensing and machine learning in the field of remote health monitoring for cardiovascular disorders. We highlight three needs in the design of new smart health technologies: (1) need for sensing technologies that track longitudinal trends of the cardiovascular disorder despite infrequent, noisy, or missing data measurements; (2) need for new analytic techniques designed in a longitudinal, continual fashion to aid in the development of new risk prediction techniques and in tracking disease progression; and (3) need for personalized and interpretable machine learning techniques, allowing for advancements in clinical decision making. We highlight these needs based upon the current state of the art in smart health technologies and analytics. We then discuss opportunities in addressing these needs for development of smart health technologies for the field of cardiovascular disorders and care.
Ensemble Deep Learning on Wearables Using Small Datasets
Taylor R. Mauldin, A. Ngu, V. Metsis, Marc E. Canby
ACM Transactions on Computing for Healthcare, vol. 2, no. 1, pp. 1-30. Published 2020-12-30. DOI: 10.1145/3428666.
This article presents an in-depth experimental study of ensemble deep learning techniques on small datasets for the analysis of time-series data generated by wearable devices. Deep learning networks generally require large datasets for training. In some health care applications, such as real-time smartwatch-based fall detection, no large, publicly available, annotated datasets exist that can be used for training, due to the nature of the problem (i.e., a fall is not a common event). We conducted a series of offline experiments using two different datasets of simulated falls for training various ensemble models. Our offline results show that an ensemble of Recurrent Neural Network (RNN) models, combined by the stacking ensemble technique, outperforms a single RNN model trained on the same data samples. Nonetheless, fall detection models trained on simulated falls and activities of daily living performed by test subjects in a controlled environment suffer from low precision due to high false-positive rates. Through a set of real-world experiments, we demonstrate that the low precision can be mitigated via the collection of false-positive feedback from end users. The final ensemble RNN model, after retraining with archived real-world user data and feedback, achieved significantly higher precision in a real-world setting with little loss of recall.
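The stacking technique named in this entry can be sketched in a few lines. This is an illustrative toy, not the authors' implementation: tiny logistic regressions stand in for the RNN base learners, the data are synthetic, and a production setup would build the meta-features from out-of-fold predictions rather than in-sample ones.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train_logreg(X, y, lr=0.5, epochs=300):
    """Tiny logistic-regression stand-in for one base (RNN) learner."""
    w = np.zeros(X.shape[1])
    b = 0.0
    for _ in range(epochs):
        p = sigmoid(X @ w + b)
        g = p - y
        w -= lr * X.T @ g / len(y)
        b -= lr * g.mean()
    return w, b

def predict_proba(model, X):
    w, b = model
    return sigmoid(X @ w + b)

# Synthetic "fall vs. non-fall" data: two features separate the classes.
n = 400
X = rng.normal(size=(n, 6))
y = (X[:, 0] + 0.8 * X[:, 1] + 0.3 * rng.normal(size=n) > 0).astype(float)
train, test = slice(0, 300), slice(300, n)

# Level 0: train each base learner on a bootstrap resample of the training set.
bases = []
for _ in range(3):
    idx = rng.integers(0, 300, size=300)
    bases.append(train_logreg(X[idx], y[idx]))

# Level 1: stack the base learners' predicted probabilities as meta-features
# and train a meta-classifier on top of them.
meta_train = np.column_stack([predict_proba(m, X[train]) for m in bases])
meta = train_logreg(meta_train, y[train])

meta_test = np.column_stack([predict_proba(m, X[test]) for m in bases])
pred = (predict_proba(meta, meta_test) > 0.5).astype(float)
accuracy = (pred == y[test]).mean()
print(f"stacked-ensemble accuracy: {accuracy:.2f}")
```

The key design point is that the meta-classifier sees only the base learners' confidence outputs, so it learns how to weigh and combine models rather than re-learning the raw signal.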
Perception Clusters
Aftab Khan, Alexandros Zenonos, G. Kalogridis, Yaowei Wang, Stefanos Vatsikas, M. Sooriyabandara
ACM Transactions on Computing for Healthcare, vol. 2, no. 1, pp. 1-16. Published 2020-12-30. DOI: 10.1145/3422819.
Automated mood recognition has been studied extensively in recent years, with particular emphasis on stress. Other affective states are also of great importance, as studying them can help in understanding human behaviours in more detail. Most studies toward automated systems capable of recognising human moods have established that mood is personal: mood perception differs amongst individuals. Previous machine learning frameworks confirm this hypothesis, with personalised models almost always outperforming generalised ones. In this article, we propose a novel system for grouping individuals into what we refer to as "perception clusters" based on their physiological signals. We evaluate perception clusters in a trial with nine users in a work environment, recording physiological and activity data for at least 10 days per user. Our results reveal no significant difference in performance relative to a personalised approach, while our method outperforms traditional generalised methods. Such an approach significantly reduces the computational requirements of personalised approaches, which require a separate model for each user. Further, perception clusters point towards semi-supervised affective modelling in which individual perceptions are inferred from the data.
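The grouping step behind perception clusters can be illustrated with a minimal k-means sketch. Everything here is an assumption for illustration: the per-user feature summaries (mean heart rate, heart-rate variability, mean electrodermal activity) and the choice of k-means are stand-ins, not the paper's actual pipeline.

```python
import numpy as np

rng = np.random.default_rng(1)

def kmeans2(X, iters=20):
    """Two-cluster k-means, deterministically initialised from the
    first and last rows so this sketch is reproducible."""
    centroids = X[[0, -1]].astype(float)
    labels = np.zeros(len(X), dtype=int)
    for _ in range(iters):
        # Distance from every user to every centroid, then reassign.
        dists = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        for j in (0, 1):
            if np.any(labels == j):
                centroids[j] = X[labels == j].mean(axis=0)
    return centroids, labels

# Hypothetical per-user summaries: mean heart rate, heart-rate variability,
# and mean electrodermal activity aggregated over the study period.
users = np.vstack([
    rng.normal([60.0, 50.0, 2.0], 0.5, size=(4, 3)),  # one perception group
    rng.normal([80.0, 30.0, 5.0], 0.5, size=(5, 3)),  # another perception group
])
centroids, groups = kmeans2(users)
print("cluster per user:", groups)
```

A shared model is then trained per cluster, so the system needs one model per perception group instead of one per user.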
Creating and Evaluating Chatbots as Eligibility Assistants for Clinical Trials
C. Chuan, Susan Morgan
ACM Transactions on Computing for Healthcare, vol. 2, no. 1, pp. 1-19. Published 2020-12-30. DOI: 10.1145/3403575.
Clinical trials are important tools for improving knowledge about the effectiveness of new treatments for all diseases, including cancers. However, studies show that fewer than 5% of cancer patients are enrolled in any type of research study or clinical trial. Although there is a wide variety of reasons for the low participation rate, we address this issue by designing a chatbot to help users determine their eligibility via interactive, two-way communication. The chatbot is supported by a user-centered classifier that uses an active deep learning approach to separate complex eligibility criteria into questions that can be easily answered by users and information that requires verification by their doctors. We collected all the available clinical trial eligibility criteria from the National Cancer Institute's website to evaluate the chatbot and the classifier. Experimental results show that the active deep learning classifier outperforms the baseline k-nearest neighbor method. In addition, an in-person experiment was conducted to evaluate the effectiveness of the chatbot. The results indicate that participants who used the chatbot achieved a better understanding of their eligibility than those who used only the website. Furthermore, the chatbot interfaces were rated significantly better in terms of perceived usability, interactivity, and dialogue.
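The active learning idea behind the classifier can be sketched as pool-based uncertainty sampling. This is a hedged toy: a logistic regression over synthetic feature vectors stands in for the paper's deep model, and the "oracle" simply reveals a stored label instead of asking an annotator.

```python
import numpy as np

rng = np.random.default_rng(2)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fit(X, y, lr=0.5, epochs=300):
    """Logistic-regression stand-in for the deep criteria classifier."""
    w = np.zeros(X.shape[1])
    b = 0.0
    for _ in range(epochs):
        p = sigmoid(X @ w + b)
        g = p - y
        w -= lr * X.T @ g / len(y)
        b -= lr * g.mean()
    return w, b

# Synthetic "criteria" features: class 0 = patient-answerable question,
# class 1 = needs verification by a doctor.
X = rng.normal(size=(200, 4))
y = (X[:, 0] - X[:, 1] > 0).astype(float)

labeled = list(range(10))                          # small labelled seed set
pool = [i for i in range(200) if i not in labeled]

for _ in range(30):                                # query 30 labels
    w, b = fit(X[labeled], y[labeled])
    p = sigmoid(X[pool] @ w + b)
    q = pool[int(np.argmin(np.abs(p - 0.5)))]      # most uncertain example
    labeled.append(q)                              # oracle provides its label
    pool.remove(q)

w, b = fit(X[labeled], y[labeled])
acc = ((sigmoid(X @ w + b) > 0.5) == y).mean()
print(f"accuracy after active learning: {acc:.2f}")
```

Querying the examples the model is least sure about concentrates annotation effort where it helps most, which matters when labelling eligibility criteria requires expert time.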
Privacy-preserving IoT Framework for Activity Recognition in Personal Healthcare Monitoring
T. Jourdan, A. Boutet, A. Bahi, Carole Frindel
ACM Transactions on Computing for Healthcare, vol. 2, no. 1, pp. 1-22. Published 2020-12-30. DOI: 10.1145/3416947.
The increasing popularity of wearable consumer products can play a significant role in the healthcare sector. The recognition of human activities from IoT devices is an important building block in this context. While analysis of the generated data stream can have many health benefits, it can also lead to privacy threats by exposing highly sensitive information. In this article, we propose a framework that relies on machine learning to efficiently recognise user activity, useful for personal healthcare monitoring, while limiting the risk of user re-identification from the biometric patterns characterizing each individual. To achieve this, we show that features in the temporal domain are useful for discriminating user activity, while features in the frequency domain tend to reveal user identity. We then design a novel protection mechanism that processes the raw signal on the user's smartphone to select relevant features for activity recognition and to normalise features sensitive to re-identification. These unlinkable features are then transferred to the application server. We extensively evaluate our framework on reference datasets: results show accurate activity recognition (87%) while limiting the re-identification rate (33%). This represents a slight decrease in utility (9%) against a large privacy improvement (53%) compared to state-of-the-art baselines.
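The split between activity-revealing and identity-revealing features can be illustrated as follows. The signals, the feature choices, and the normalisation step are all simplified assumptions; the paper's actual feature list and protection mechanism are richer.

```python
import numpy as np

rng = np.random.default_rng(3)

def temporal_features(sig):
    """Time-domain summaries: useful for activity, kept as-is."""
    return np.array([sig.mean(), sig.std(), np.abs(np.diff(sig)).mean()])

def frequency_features(sig, k=5):
    """Magnitudes of the first k non-DC FFT bins: identity-revealing."""
    return np.abs(np.fft.rfft(sig))[1:k + 1]

# One accelerometer-like trace per user (hypothetical data): a shared
# activity component plus user-specific noise characteristics.
base = np.sin(np.linspace(0, 8 * np.pi, 256))
signals = [base + rng.normal(0.0, 1.0 + 0.1 * u, size=256) for u in range(6)]

temp = np.array([temporal_features(s) for s in signals])
freq = np.array([frequency_features(s) for s in signals])

# Normalise the frequency features across users so per-user spectral
# "fingerprints" are flattened before anything leaves the phone.
freq_norm = (freq - freq.mean(axis=0)) / (freq.std(axis=0) + 1e-12)
features_to_server = np.hstack([temp, freq_norm])
print(features_to_server.shape)
```

Only the combined, unlinkable feature vector would then be transferred to the server, never the raw signal.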
Designing Visual Markers for Continuous Artificial Intelligence Support
Niels van Berkel, O. Ahmad, D. Stoyanov, L. Lovat, A. Blandford
ACM Transactions on Computing for Healthcare, pp. 1-24. Published 2020-12-30. DOI: 10.1145/3422156.
Colonoscopy, the visual inspection of the large bowel using an endoscope, offers protection against colorectal cancer by allowing for the detection and removal of pre-cancerous polyps. The literature on polyp detection shows widely varying miss rates among clinicians, with averages around 22%-27%. While recent work has considered the use of AI support systems for polyp detection, how to visualise and integrate these systems into clinical practice remains an open question. In this work, we explore the design of visual markers as used in an AI support system for colonoscopy. Supported by the gastroenterologists in our team, we designed seven unique visual markers and rendered them on real-life patient video footage. Through an online survey targeting relevant clinical staff (N = 36), we evaluated these designs and obtained initial insights into how clinical staff envision AI integrating into their daily work environment. Our results provide concrete recommendations for the future deployment of AI support systems in continuous, adaptive scenarios.
Generalized and Efficient Skill Assessment from IMU Data with Applications in Gymnastics and Medical Training
Aftab Khan, Sebastian Mellor, R. King, Balazs Janko, W. Harwin, R. Sherratt, I. Craddock, T. Plötz
ACM Transactions on Computing for Healthcare, vol. 2, no. 1, pp. 1-21. Published 2020-12-30. DOI: 10.1145/3422168.
Human activity recognition is progressing from automatically determining what a person is doing and when, to additionally analyzing the quality of these activities, typically referred to as skill assessment. In this article, we propose a new framework for skill assessment that generalizes across application domains and can be deployed for near-real-time applications. It is based on the notion that skill is defined by the repeatability of activities. The analysis rests on two subsequent classification steps that analyze (1) movements or activities and (2) their qualities, that is, the actual skills of the person performing them. The first classifier is trained in either a supervised or unsupervised manner and provides confidence scores, which are then used for assessing skill. We evaluate the proposed method in two scenarios: gymnastics and surgical skill training of medical students. We demonstrate both the overall effectiveness and the efficiency of the generalized assessment method, especially compared to previous work.
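One way to turn movement-classifier confidence into a skill estimate, following the repeatability idea above, is a toy rule that rewards confidence that is both high and stable across repetitions. The scoring formula and the confidence values below are assumptions for illustration, not the authors' exact method.

```python
import numpy as np

def skill_score(confidences):
    """Toy skill measure: repeatable (skilled) execution yields
    consistently high movement-classifier confidence, so reward a
    high mean and penalise spread across repetitions."""
    c = np.asarray(confidences, dtype=float)
    return float(c.mean() * (1.0 - c.std()))

# Per-repetition confidence of a movement classifier (hypothetical values).
expert = [0.95, 0.93, 0.96, 0.94, 0.95]   # high and stable
novice = [0.90, 0.40, 0.75, 0.30, 0.85]   # erratic
print(skill_score(expert), skill_score(novice))
```

Because the rule needs only the first-stage classifier's confidence stream, it can run in near real time without a separate quality model per activity.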
Mobile and Wearable Sensing Frameworks for mHealth Studies and Applications
Devender Kumar, S. Jeuris, J. Bardram, N. Dragoni
ACM Transactions on Computing for Healthcare, vol. 2, no. 1, pp. 1-28. Published 2020-12-30. DOI: 10.1145/3422158.
With the widespread use of smartphones and wearable health sensors, a plethora of mobile health (mHealth) applications that track well-being and support human behavioral studies and clinical trials have emerged in recent years. However, the design, development, and deployment of mHealth applications are challenging in many ways. To address these challenges, several generic mobile sensing frameworks have been researched over the past decade. Such frameworks assist developers and researchers in reducing the complexity, time, and cost required to build and deploy health-sensing applications. The main goal of this article is to provide the reader with an overview of the state of the art in health-focused generic mobile and wearable sensing frameworks. The review gives a detailed analysis of the functional and non-functional features of existing frameworks, the health studies they were used in, and the stakeholders they support. We also analyze their historical evolution, uptake, and maintenance after the initial release. Based on this analysis, we suggest new features and opportunities for future generic mHealth sensing frameworks.
Transfer Learning for Human Activity Recognition Using Representational Analysis of Neural Networks
Sizhe An, Ganapati Bhat, S. Gumussoy, Ümit Y. Ogras
ACM Transactions on Computing for Healthcare, vol. 4, no. 1, pp. 1-21. Published 2020-12-05. DOI: 10.1145/3563948.
Interest in human activity recognition (HAR) has grown in recent years due to its applications in mobile health monitoring and patient rehabilitation. The typical approach is to train a HAR classifier offline with known users and then use the same classifier for new users. However, accuracy for new users can be low with this approach if their activity patterns differ from those in the training data. At the same time, training from scratch for new users is not feasible for mobile applications due to the high computational cost and training time. To address this issue, we propose a HAR transfer learning framework with two components. First, a representational analysis reveals common features that can transfer across users and user-specific features that need to be customized. Using this insight, we transfer the reusable portion of the offline classifier to new users and fine-tune only the rest. Our experiments with five datasets show up to 43% accuracy improvement and 66% training time reduction compared to a baseline without transfer learning. Furthermore, measurements on the hardware platform reveal that power and energy consumption decreased by 43% and 68%, respectively, while achieving the same or higher accuracy than training from scratch. Our code is released for reproducibility.
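The transfer-and-fine-tune recipe (reuse the shared feature extractor, retrain only the remaining layers for a new user) can be sketched with a tiny numpy MLP. The network, data, and training loop are illustrative stand-ins, not the authors' architecture or code.

```python
import numpy as np

rng = np.random.default_rng(4)

def relu(x):
    return np.maximum(x, 0.0)

def train(X, y, W1, b1, W2, b2, lr=0.1, epochs=500, freeze_features=False):
    """One-hidden-layer MLP with logistic loss; optionally freeze the
    feature-extractor parameters (W1, b1) and update only the head."""
    for _ in range(epochs):
        h = relu(X @ W1 + b1)                        # shared features
        p = 1.0 / (1.0 + np.exp(-(h @ W2 + b2)))     # head prediction
        g = (p - y) / len(y)                         # dLoss/dlogit
        if not freeze_features:
            gh = np.outer(g, W2) * (h > 0)           # backprop through ReLU
            W1 -= lr * X.T @ gh
            b1 -= lr * gh.sum(axis=0)
        W2 -= lr * h.T @ g
        b2 -= lr * g.sum()
    return W1, b1, W2, b2

# Offline phase: train the full network on known users.
Xs = rng.normal(size=(300, 6))
ys = (Xs[:, 0] + Xs[:, 1] > 0).astype(float)
W1 = rng.normal(0, 0.5, size=(6, 8)); b1 = np.zeros(8)
W2 = rng.normal(0, 0.5, size=8);      b2 = 0.0
W1, b1, W2, b2 = train(Xs, ys, W1, b1, W2, b2)

# New user: transfer the feature extractor, fine-tune only the head.
W1_frozen = W1.copy()
W2_before = W2.copy()
Xt = rng.normal(size=(40, 6))
yt = (Xt[:, 0] + Xt[:, 1] > 0).astype(float)
W1, b1, W2, b2 = train(Xt, yt, W1, b1, W2, b2, freeze_features=True)
acc = ((1.0 / (1.0 + np.exp(-(relu(Xt @ W1 + b1) @ W2 + b2))) > 0.5) == yt).mean()
print(f"new-user training accuracy: {acc:.2f}")
```

Freezing the transferred layers is what yields the training-time and energy savings on device: only the small head's gradients are computed and applied for each new user.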
Chronic Pain Protective Behavior Detection with Deep Learning
Chongyang Wang, Temitayo A. Olugbade, Akhil Mathur, A. Williams, N. Lane, N. Bianchi-Berthouze
ACM Transactions on Computing for Healthcare, vol. 2, no. 1, pp. 1-24. Published 2020-11-29. DOI: 10.1145/3449068.
In chronic pain rehabilitation, physiotherapists adapt physical activity to patients' performance based on their expression of protective behavior, gradually exposing them to feared but harmless and essential everyday activities. As rehabilitation moves outside the clinic, technology should automatically detect such behavior to provide similar support. Previous work has shown the feasibility of automatic protective behavior detection (PBD) within a specific activity. In this article, we investigate the use of deep learning for PBD across activity types, using wearable motion capture and surface electromyography data collected from healthy participants and people with chronic pain. We approach the problem by continuously detecting protective behavior within an activity rather than estimating its overall presence. The best performance reaches a mean F1 score of 0.82 with leave-one-subject-out cross-validation. When protective behavior is modeled per activity type, performance achieves mean F1 scores of 0.77 for bend-down, 0.81 for one-leg-stand, 0.72 for sit-to-stand, 0.83 for stand-to-sit, and 0.67 for reach-forward. This performance shows an excellent level of agreement with average expert ratings, suggesting potential for personalized chronic pain management at home. We analyze various parameters characterizing our approach to understand how the results could generalize to other PBD datasets and to different levels of ground-truth granularity.
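The evaluation protocol used above, per-frame F1 under leave-one-subject-out cross-validation, is generic and can be sketched directly. The labels are toy data, and a majority-class rule stands in for the deep model.

```python
import numpy as np

def f1_score(y_true, y_pred):
    """Binary F1: harmonic mean of precision and recall.
    Returns 0.0 when there are no true positives."""
    y_true = np.asarray(y_true)
    y_pred = np.asarray(y_pred)
    tp = np.sum((y_true == 1) & (y_pred == 1))
    fp = np.sum((y_true == 0) & (y_pred == 1))
    fn = np.sum((y_true == 1) & (y_pred == 0))
    if tp == 0:
        return 0.0
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

def loso_splits(subject_ids):
    """Leave-one-subject-out: each fold tests on one held-out subject,
    so no subject's data appears in both train and test."""
    subject_ids = np.asarray(subject_ids)
    for s in np.unique(subject_ids):
        yield np.where(subject_ids != s)[0], np.where(subject_ids == s)[0]

# Hypothetical frame-level labels (1 = protective behaviour) for 3 subjects.
subjects = np.array([0, 0, 0, 1, 1, 1, 2, 2, 2])
y = np.array([1, 0, 1, 0, 0, 1, 1, 1, 0])

scores = []
for train_idx, test_idx in loso_splits(subjects):
    # Stand-in "model": predict the majority class of the training frames.
    majority = int(y[train_idx].sum() * 2 >= len(train_idx))
    pred = np.full(len(test_idx), majority)
    scores.append(f1_score(y[test_idx], pred))
print("mean LOSO F1:", np.mean(scores))
```

Holding out whole subjects, rather than random frames, is what makes the reported scores an estimate of performance on unseen people.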