Attention-Gated Graph Convolutions for Extracting Drug Interaction Information from Drug Labels
Tung Tran, Ramakanth Kavuluru, Halil Kilicoglu
ACM Transactions on Computing for Healthcare 2(2), 2021-03-01. DOI: 10.1145/3423209

Preventable adverse events as a result of medical errors present a growing concern in the healthcare system. As drug-drug interactions (DDIs) may lead to preventable adverse events, being able to extract DDIs from drug labels into a machine-processable form is an important step toward effective dissemination of drug safety information. Herein, we tackle the problem of jointly extracting mentions of drugs and their interactions, including interaction outcome, from drug labels. Our deep learning approach entails composing various intermediate representations, including graph-based context derived using graph convolutions (GCs) with a novel attention-based gating mechanism (holistically called GCA), which are combined in meaningful ways to predict on all subtasks jointly. Our model is trained and evaluated on the 2018 TAC DDI corpus. Our GCA model in conjunction with transfer learning performs at 39.20% F1 and 26.09% F1 on entity recognition (ER) and relation extraction (RE), respectively, on the first official test set and at 45.30% F1 and 27.87% F1 on ER and RE, respectively, on the second official test set. These updated results lead to improvements over our prior best by up to 6 absolute F1 points. After controlling for available training data, the proposed model exhibits state-of-the-art performance for this task.
Saliency-Aware Class-Agnostic Food Image Segmentation
S. Yarlagadda, D. M. Montserrat, D. Güera, C. Boushey, D. Kerr, F. Zhu
ACM Transactions on Computing for Healthcare 2(1), pp. 1-17, 2021-02-13. DOI: 10.1145/3440274

Advances in image-based dietary assessment methods have allowed nutrition professionals and researchers to improve the accuracy of dietary assessment, where images of food consumed are captured using smartphones or wearable devices. These images are then analyzed using computer vision methods to estimate the energy and nutrition content of the foods. Food image segmentation, which determines the regions in an image where foods are located, plays an important role in this process. Current methods are data dependent and thus cannot generalize well to different food types. To address this problem, we propose a class-agnostic food image segmentation method. Our method uses a pair of eating scene images, one captured before eating starts and one after eating is completed. Using information from both the before and after eating images, we can segment food images by finding the salient missing objects, without any prior information about the food class. We model a paradigm of top-down saliency that guides the attention of the human visual system based on the task of finding the salient missing objects in a pair of images. Our method is validated on food images collected from a dietary study and shows promising results.
Selecting Treatment Effects Models for Domain Adaptation Using Causal Knowledge
Trent Kyono, I. Bica, Z. Qian, M. van der Schaar
ACM Transactions on Computing for Healthcare 4(1), pp. 1-29, 2021-02-11. DOI: 10.1145/3587695

While a large number of causal inference models for estimating individualized treatment effects (ITE) have been developed, selecting the best one poses a unique challenge, since the counterfactuals are never observed. The problem is further complicated in the unsupervised domain adaptation (UDA) setting, where we have access to labeled samples in the source domain but must select an ITE model that achieves good performance on a target domain where only unlabeled samples are available. Existing selection techniques for UDA are designed for predictive models and are sub-optimal for causal inference because they (1) do not account for the missing counterfactuals and (2) only examine the discriminative density ratios between the input covariates in the source and target domain, without factoring in the model's predictions in the target domain. We leverage the invariance of causal structures across domains to introduce a novel model selection metric specifically designed for ITE models under UDA. We propose selecting models whose predictions of the effects of interventions satisfy invariant causal structures in the target domain. Experimentally, our method selects ITE models that are more robust to covariate shifts on a variety of datasets, including estimating the effect of ventilation in COVID-19 patients.
Smartphone Sonar-Based Contact-Free Respiration Rate Monitoring
Xuyu Wang, Runze Huang, Chao Yang, S. Mao
ACM Transactions on Computing for Healthcare 2(1), pp. 1-26, 2021-02-09. DOI: 10.1145/3436822

Vital sign monitoring (e.g., of respiration rate) has become increasingly important because it offers useful clues about medical conditions such as sleep disorders. There is a compelling need for technologies that enable contact-free, easily deployed vital sign monitoring over extended periods of time for healthcare. In this article, we present SonarBeat, a system that leverages phase-based active sonar to monitor respiration rates with smartphones. We provide a sonar phase analysis and discuss the technical challenges of respiration rate estimation using an inaudible sound signal. Moreover, we design and implement the SonarBeat system, with components including signal generation, data extraction, received-signal preprocessing, and breathing rate estimation, on Android smartphones. Our extensive experimental results validate the superior performance of SonarBeat in different indoor environment settings.
Machine Learning for Sleep Apnea Detection with Unattended Sleep Monitoring at Home
Stein Kristiansen, K. Nikolaidis, T. Plagemann, V. Goebel, G. Traaen, B. Øverland, L. Aakerøy, T. Hunt, J. P. Loennechen, S. Steinshamn, C. Bendz, O. Anfinsen, L. Gullestad, H. Akre
ACM Transactions on Computing for Healthcare 2(1), pp. 1-25, 2021-02-09. DOI: 10.1145/3433987

Sleep apnea is a common, severe, and strongly under-diagnosed sleep-related respiratory disorder with periods of disrupted or reduced breathing during sleep. To diagnose sleep apnea, sleep data are collected with either polysomnography or polygraphy and scored by a sleep expert. In this work, we investigate the use of supervised machine learning to automate the analysis of polygraphy data from the A3 study, which contains more than 7,400 hours of sleep monitoring data from 579 patients. We conduct a systematic comparative study of classification performance and resource use with different combinations of 27 classifiers and four sleep signals. The classifiers achieve up to 0.8941 accuracy (kappa: 0.7877) when using all four signal types simultaneously and up to 0.8543 accuracy (kappa: 0.7080) with only one signal, i.e., oxygen saturation. Methods based on deep learning outperform other methods by a large margin. All deep learning methods achieve nearly the same maximum classification performance even though they have very different architectures and sizes. When jointly accounting for classification performance, resource consumption, and the ability to achieve high classification performance with less training data, we find that convolutional neural networks substantially outperform the other classifiers.
Introduction to the Special Issue on the Wearable Technologies for Smart Health, Part 2
D. Kotz, G. Xing
ACM Transactions on Computing for Healthcare 2(1), pp. 1-2, 2021-01-20. DOI: 10.1145/3442350

Wearable health-tracking consumer products are gaining popularity, including smart watches, fitness trackers, smart clothing, and head-mounted devices. These wearable devices promise new opportunities for the study of health-related behavior, for tracking of chronic conditions, and for innovative interventions in support of health and wellness. Next-generation wearable technologies have the potential to transform today's hospital-centered healthcare practices into proactive, individualized care. Although it seems new technologies enter the marketplace every week, there is still a great need for research on the development of sensors, sensor-data analytics, wearable interaction modalities, and more.

In this special issue, we sought to assemble a set of articles addressing novel computational research related to any aspect of the design or use of wearables in medicine and health, including wearable hardware design, AI and data analytics algorithms, human-device interaction, security/privacy, and novel applications. Here, in Part 2 of a two-part collection of articles on this topic, we are pleased to share four articles about the use of wearables for skill assessment, activity recognition, mood recognition, and deep learning.

In the first article, Generalized and Efficient Skill Assessment from IMU Data with Applications in Gymnastics and Medical Training, Khan et al. propose a new framework for skill assessment that generalizes across application domains and can be deployed for different near-real-time applications. The effectiveness and efficiency of the proposed approach is validated in gymnastics and in the surgical skill training of medical students.

In the next article, Privacy-preserving IoT Framework for Activity Recognition in Personal Healthcare Monitoring, Jourdan et al. propose a framework that uses machine learning to recognize user activity, in the context of personal healthcare monitoring, while limiting the risk of users' re-identification from the biometric patterns that characterize an individual. Their solution trades off privacy and utility, accepting a slight decrease in utility (a 9% drop in accuracy) for a large increase in privacy.

Next, the article Perception Clusters: Automated Mood Recognition using a Novel Cluster-driven Modelling System proposes a mood-recognition system that groups individuals into "perception clusters" based on their physiological signals. This method can provide inference results that are more accurate than generalized models, without the need for the extensive training data necessary to build personalized models. In this regard, the approach is a compromise between generalized and personalized models for automated mood recognition (AMR).

Finally, in an article about Ensemble Deep Learning on Wearables Using Small Datasets, Ngu et al. describe an in-depth experimental study of ensemble deep learning techniques on small time-series datasets generated by wearable devices, which is motivated by the fact that there […]
Data-driven Context Detection Leveraging Passively Sensed Nearables for Recognizing Complex Activities of Daily Living
A. Akbari, Reese Grimsley, R. Jafari
ACM Transactions on Computing for Healthcare 2(1), pp. 1-22, 2021-01-04. DOI: 10.1145/3428664

Wearable systems have unlocked new sensing paradigms in various applications, such as human activity recognition, which can enhance the effectiveness of mobile health applications. Current wearable systems are not capable of understanding their surroundings, which limits their sensing capabilities. For instance, distinguishing activities such as attending a meeting or a class, which have similar motion patterns but happen in different contexts, is challenging using wearable motion sensors alone. This article focuses on understanding a user's surroundings, i.e., the environmental context, to enhance the capability of wearables, with a focus on detecting complex activities of daily living (ADL). We develop a methodology to automatically detect context using passively observable information broadcast by devices in the user's locale. This system requires no specific infrastructure or additional hardware. We develop a pattern extraction algorithm and a probabilistic mapping between contexts and activities to reduce the set of probable outcomes. The proposed system contains a general ADL classifier that works with motion sensors, learns personalized context, and uses that context to reduce the search space of activities to those that occur within a certain context. We collected real-world data of complex ADLs, and by narrowing the search space with context, we improve the average F1-score from 0.72 to 0.80.
A Survey of Challenges and Opportunities in Sensing and Analytics for Risk Factors of Cardiovascular Disorders
Nathan C Hurley, Erica S Spatz, Harlan M Krumholz, Roozbeh Jafari, Bobak J Mortazavi
ACM Transactions on Computing for Healthcare 2(1), 2021-01-01 (epub 2020-12-30). DOI: 10.1145/3417958

Cardiovascular disorders cause nearly one in three deaths in the United States. Short- and long-term care for these disorders is often determined in short-term settings. However, these decisions are made with minimal longitudinal and long-term data. To overcome this bias towards data from acute care settings, improved longitudinal monitoring for cardiovascular patients is needed. Longitudinal monitoring provides a more comprehensive picture of patient health, allowing for informed decision making. This work surveys sensing and machine learning in the field of remote health monitoring for cardiovascular disorders. We highlight three needs in the design of new smart health technologies: (1) the need for sensing technologies that track longitudinal trends of the cardiovascular disorder despite infrequent, noisy, or missing data measurements; (2) the need for new analytic techniques designed in a longitudinal, continual fashion to aid in the development of new risk prediction techniques and in tracking disease progression; and (3) the need for personalized and interpretable machine learning techniques, allowing for advancements in clinical decision making. We highlight these needs based upon the current state of the art in smart health technologies and analytics. We then discuss opportunities in addressing these needs for development of smart health technologies for the field of cardiovascular disorders and care.
Ensemble Deep Learning on Wearables Using Small Datasets
Taylor R. Mauldin, A. Ngu, V. Metsis, Marc E. Canby
ACM Transactions on Computing for Healthcare 2(1), pp. 1-30, 2020-12-30. DOI: 10.1145/3428666

This article presents an in-depth experimental study of ensemble deep learning techniques on small datasets for the analysis of time-series data generated by wearable devices. Deep learning networks generally require large datasets for training. In some healthcare applications, such as real-time smartwatch-based fall detection, there are no publicly available large annotated datasets that can be used for training, due to the nature of the problem (i.e., a fall is not a common event). We conducted a series of offline experiments using two different datasets of simulated falls to train various ensemble models. Our offline experimental results show that an ensemble of Recurrent Neural Network (RNN) models, combined by the stacking ensemble technique, outperforms a single RNN model trained on the same data samples. Nonetheless, fall detection models trained on simulated falls and activities of daily living performed by test subjects in a controlled environment suffer from low precision due to high false-positive rates. In this work, through a set of real-world experiments, we demonstrate that the low precision can be mitigated by collecting false-positive feedback from end-users. The final ensemble RNN model, after re-training with real-world archived user data and feedback, achieved significantly higher precision without much loss of recall in a real-world setting.
Perception Clusters: Automated Mood Recognition Using a Novel Cluster-driven Modelling System
Aftab Khan, Alexandros Zenonos, G. Kalogridis, Yaowei Wang, Stefanos Vatsikas, M. Sooriyabandara
ACM Transactions on Computing for Healthcare 2(1), pp. 1-16, 2020-12-30. DOI: 10.1145/3422819

Automated mood recognition has been studied in recent times, with great emphasis on stress in particular. Other affective states are also of great importance, as studying them can help in understanding human behaviours in more detail. Most studies aimed at realising an automated system capable of recognising human moods have established that mood is personal; that is, mood perception differs amongst individuals. Previous machine learning-based frameworks confirm this hypothesis, with personalised models almost always outperforming generalised ones. In this article, we propose a novel system for grouping individuals into what we refer to as "perception clusters" based on their physiological signals. We evaluate perception clusters in a trial with nine users in a work environment, recording physiological and activity data for at least 10 days. Our results reveal no significant difference in performance relative to a personalised approach, while our method performs better than traditional generalised methods. Such an approach significantly reduces the computational requirements that personalised approaches otherwise incur by developing a separate model for each user. Further, perception clusters point towards semi-supervised affective modelling, in which individual perceptions are inferred from the data.