Mental state assessment by analysing user-generated content is a field that has recently attracted considerable attention. Many people increasingly use online social media platforms to share their feelings and moods. This provides a unique opportunity for researchers and health practitioners to proactively identify linguistic markers or patterns that correlate with mental disorders such as depression, schizophrenia or suicidal behaviour. This survey describes and reviews the approaches that have been proposed for mental state assessment and identification of disorders using online digital records. The presented studies are organised according to the assessment technology and the feature extraction process conducted. We also present a series of studies which explore different aspects of the language and behaviour of individuals suffering from mental disorders, and discuss various aspects related to the development of experimental frameworks. Furthermore, ethical considerations regarding the treatment of individuals’ data are outlined. The main contributions of this survey are a comprehensive analysis of the proposed approaches for online mental state assessment on social media, a structured categorisation of the methods according to their design principles, lessons learnt over the years, and a discussion of possible avenues for future research.
A Survey of Computational Methods for Online Mental State Assessment on Social Media. E. A. Ríssola, D. Losada, F. Crestani. ACM Transactions on Computing for Healthcare 2(1), pp. 1–31. Published 2021-03-17. DOI: 10.1145/3437259.
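As a concrete, deliberately toy illustration of the feature-extraction step such surveys describe, the sketch below scores posts with two tiny hand-made lexicons. The word lists, weights, and the `risk_score` helper are invented stand-ins for real resources such as LIWC, not any method from the survey.

```python
import re
from collections import Counter

# Hypothetical mini-lexicons; published studies typically use validated
# resources (e.g., LIWC categories) rather than hand-picked words.
FIRST_PERSON = {"i", "me", "my", "mine", "myself"}
NEG_EMOTION = {"sad", "alone", "tired", "hopeless", "worthless", "empty"}

def extract_features(post: str) -> dict:
    """Map a post to simple linguistic-marker rates (occurrences per token)."""
    tokens = re.findall(r"[a-z']+", post.lower())
    n = max(len(tokens), 1)
    counts = Counter(tokens)
    return {
        "first_person_rate": sum(counts[w] for w in FIRST_PERSON) / n,
        "neg_emotion_rate": sum(counts[w] for w in NEG_EMOTION) / n,
    }

def risk_score(post: str, weights=(2.0, 5.0)) -> float:
    """Toy linear score over the two marker rates; the weights are made up."""
    f = extract_features(post)
    return weights[0] * f["first_person_rate"] + weights[1] * f["neg_emotion_rate"]
```

In a real pipeline these rates would feed a trained classifier rather than fixed weights; the point is only that assessment reduces to extracting interpretable linguistic features from each post.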
J. Minot, N. Cheney, Marc E. Maier, Danne C. Elbers, C. Danforth, P. Dodds
Medical systems in general, and patient treatment decisions and outcomes in particular, can be affected by bias based on gender and other demographic elements. As language models are increasingly applied to medicine, there is a growing interest in building algorithmic fairness into processes impacting patient care. Much of the work addressing this question has focused on biases encoded in language models—statistical estimates of the relationships between concepts derived from distant reading of corpora. Building on this work, we investigate how differences in gender-specific word frequency distributions and language models interact with regard to bias. We identify and remove gendered language from two clinical-note datasets and describe a new debiasing procedure using BERT-based gender classifiers. We show minimal degradation in health condition classification tasks for low to medium levels of dataset bias removal via data augmentation. Finally, we compare the bias semantically encoded in the language models with the bias empirically observed in health records. This work outlines an interpretable approach for using data augmentation to identify and reduce biases in natural language processing pipelines.
Interpretable Bias Mitigation for Textual Data: Reducing Genderization in Patient Notes While Maintaining Classification Performance. ACM Transactions on Computing for Healthcare, pp. 1–41. Published 2021-03-10. DOI: 10.1145/3524887.
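One simple form of the bias-removing data augmentation described above is replacing gendered tokens with a neutral placeholder. The term list below is a hypothetical illustration; the paper instead locates gendered language with BERT-based gender classifiers.

```python
import re

# Hypothetical gendered-term list; a real pipeline would learn which
# tokens carry gender signal rather than enumerate them by hand.
GENDERED = {"he", "she", "him", "his", "her", "hers", "himself", "herself",
            "mr", "mrs", "ms", "male", "female", "man", "woman"}

def degender(note: str) -> str:
    """Replace gendered tokens in a clinical note with a neutral placeholder."""
    return re.sub(
        r"[A-Za-z]+",
        lambda m: "[NEUTRAL]" if m.group(0).lower() in GENDERED else m.group(0),
        note,
    )
```

Training a downstream condition classifier on both the original and degendered notes is the augmentation step; the paper's finding is that classification performance degrades only minimally under low to medium levels of such removal.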
Md Momin Al Aziz, Shahin Kamali, N. Mohammed, Xiaoqian Jiang
Digitization of healthcare records has contributed to a large volume of functional scientific data that can help researchers to understand the behaviour of many diseases. However, the privacy implications of this data, particularly genomics data, have surfaced recently, as the collection, dissemination, and analysis of human genomics data is highly sensitive. There have been multiple privacy attacks relying on the uniqueness of the human genome that reveal a participant’s or a certain group’s presence in a dataset. Therefore, current data sharing policies have ruled out any public dissemination and adopted precautionary measures prior to genomics data release, which hinders timely scientific innovation. In this article, we investigate an approach that only releases the statistics from genomic data rather than the whole dataset and propose a generalized Differentially Private mechanism for Genome-wide Association Studies (GWAS). Our method provides a quantifiable privacy guarantee that adds noise to the intermediate outputs but ensures satisfactory accuracy of the private results. Furthermore, the proposed method offers multiple adjustable parameters that the data owners can set based on the optimal privacy requirements. These variables are presented as equalizers that balance between the privacy and utility of the GWAS. The method also incorporates an Online Bin Packing technique [1], which bounds the privacy loss so that it grows linearly with the number of open bins and scales with the incoming queries. Finally, we implemented and benchmarked our approach using seven different GWAS studies to test the performance of the proposed methods. The experimental results demonstrate that for 1,000 arbitrary online queries, our algorithms are more than 80% accurate with reasonable privacy loss and exceed the state-of-the-art approaches on multiple studies (i.e., EigenStrat, LMM, TDT).
Online Algorithm for Differentially Private Genome-wide Association Studies. ACM Transactions on Computing for Healthcare, pp. 1–27. Published 2021-03-05. DOI: 10.1145/3431504.
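The core privacy primitive here, releasing a statistic plus calibrated Laplace noise, can be sketched as follows. The allele count, sensitivity, and epsilon are made-up example values; the paper's full mechanism additionally manages the online query stream and the bin-packing-based privacy accounting.

```python
import math
import random

def laplace_mechanism(value: float, sensitivity: float, epsilon: float,
                      rng: random.Random) -> float:
    """Release value + Laplace(sensitivity/epsilon) noise (epsilon-DP release).

    Laplace noise is drawn by inverse-CDF sampling from a uniform variate.
    """
    u = rng.random() - 0.5
    scale = sensitivity / epsilon
    return value - scale * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))

# Example: privately release a (hypothetical) minor-allele count from a
# GWAS contingency table. Adding/removing one participant changes the
# count by at most 1, so the sensitivity is 1.
rng = random.Random(0)
private_count = laplace_mechanism(412.0, sensitivity=1.0, epsilon=1.0, rng=rng)
```

Smaller epsilon means a larger noise scale, which is exactly the privacy-utility "equalizer" trade-off the abstract describes.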
Preventable adverse events as a result of medical errors present a growing concern in the healthcare system. As drug-drug interactions (DDIs) may lead to preventable adverse events, being able to extract DDIs from drug labels into a machine-processable form is an important step toward effective dissemination of drug safety information. Herein, we tackle the problem of jointly extracting mentions of drugs and their interactions, including interaction outcome, from drug labels. Our deep learning approach entails composing various intermediate representations, including graph-based context derived using graph convolutions (GCs) with a novel attention-based gating mechanism (holistically called GCA), which are combined in meaningful ways to predict on all subtasks jointly. Our model is trained and evaluated on the 2018 TAC DDI corpus. Our GCA model in conjunction with transfer learning performs at 39.20% F1 and 26.09% F1 on entity recognition (ER) and relation extraction (RE), respectively, on the first official test set and at 45.30% F1 and 27.87% F1 on ER and RE, respectively, on the second official test set. These updated results lead to improvements over our prior best by up to 6 absolute F1 points. After controlling for available training data, the proposed model exhibits state-of-the-art performance for this task.
Attention-Gated Graph Convolutions for Extracting Drug Interaction Information from Drug Labels. Tung Tran, Ramakanth Kavuluru, Halil Kilicoglu. ACM Transactions on Computing for Healthcare 2(2). Published 2021-03-01. DOI: 10.1145/3423209.
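A heavily simplified, hypothetical version of a gated graph-convolution step (not the paper's exact GCA architecture, whose gating is attention-based and trained jointly across subtasks) might look like this in NumPy:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gated_graph_conv(H, A, W, Wg):
    """One graph-convolution step whose output is modulated by a learned
    scalar gate per node: a toy stand-in for an attention-gated GC block.

    H  : (n, d)  node features (e.g., token representations)
    A  : (n, n)  adjacency matrix (e.g., dependency edges)
    W  : (d, d)  convolution weights
    Wg : (d, 1)  gate weights
    """
    A_hat = A + np.eye(A.shape[0])           # add self-loops
    deg = A_hat.sum(axis=1, keepdims=True)
    context = (A_hat / deg) @ H @ W          # mean-aggregated neighbourhood
    gate = sigmoid(H @ Wg)                   # per-node gate in (0, 1)
    return gate * context + (1 - gate) * H   # gated mix of context and input

rng = np.random.default_rng(0)
H = rng.normal(size=(4, 8))
A = np.array([[0, 1, 0, 0], [1, 0, 1, 0], [0, 1, 0, 1], [0, 0, 1, 0]], float)
out = gated_graph_conv(H, A, rng.normal(size=(8, 8)), rng.normal(size=(8, 1)))
```

The gate lets the model decide, per token, how much graph-derived context to admit versus how much of the original representation to keep.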
S. Yarlagadda, D. M. Montserrat, D. Güera, C. Boushey, D. Kerr, F. Zhu
Advances in image-based dietary assessment methods have allowed nutrition professionals and researchers to improve the accuracy of dietary assessment, where images of food consumed are captured using smartphones or wearable devices. These images are then analyzed using computer vision methods to estimate energy and nutrition content of the foods. Food image segmentation, which determines the regions in an image where foods are located, plays an important role in this process. Current methods are data dependent and thus cannot generalize well for different food types. To address this problem, we propose a class-agnostic food image segmentation method. Our method uses a pair of eating scene images, one before starting eating and one after eating is completed. Using information from both the before and after eating images, we can segment food images by finding the salient missing objects without any prior information about the food class. We model a paradigm of top-down saliency that guides the attention of the human visual system based on a task to find the salient missing objects in a pair of images. Our method was validated on food images collected from a dietary study and showed promising results.
Saliency-Aware Class-Agnostic Food Image Segmentation. ACM Transactions on Computing for Healthcare 2(1), pp. 1–17. Published 2021-02-13. DOI: 10.1145/3440274.
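The before/after intuition can be sketched with plain image differencing on a toy grayscale scene. Real eating-scene pairs need registration, illumination handling, and the saliency modelling described above, all of which this sketch skips.

```python
import numpy as np

def missing_object_mask(before, after, thresh=30):
    """Mark pixels whose intensity changed strongly between the
    before-eating and after-eating images: a crude stand-in for
    salient-missing-object segmentation."""
    diff = np.abs(before.astype(int) - after.astype(int))
    return diff > thresh

# Toy 8x8 scene: a bright "food" patch that disappears after eating.
before = np.zeros((8, 8), dtype=np.uint8)
before[2:5, 2:5] = 200
after = np.zeros((8, 8), dtype=np.uint8)
mask = missing_object_mask(before, after)
```

Because the mask depends only on what changed between the two images, no food-class label is ever needed, which is the class-agnostic property the paper targets.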
While a large number of causal inference models for estimating individualized treatment effects (ITE) have been developed, selecting the best one poses a unique challenge, since the counterfactuals are never observed. The problem is further complicated in the unsupervised domain adaptation (UDA) setting, where we have access to labeled samples in the source domain but wish to select an ITE model that achieves good performance on a target domain where only unlabeled samples are available. Existing selection techniques for UDA are designed for predictive models and are sub-optimal for causal inference because they (1) do not account for the missing counterfactuals and (2) only examine the discriminative density ratios between the input covariates in the source and target domain and do not factor in the model’s predictions in the target domain. We leverage the invariance of causal structures across domains to introduce a novel model selection metric specifically designed for ITE models under UDA. We propose selecting models whose predictions of the effects of interventions satisfy invariant causal structures in the target domain. Experimentally, our method selects ITE models that are more robust to covariate shifts on a variety of datasets, including estimating the effect of ventilation in COVID-19 patients.
Selecting Treatment Effects Models for Domain Adaptation Using Causal Knowledge. Trent Kyono, I. Bica, Z. Qian, M. van der Schaar. ACM Transactions on Computing for Healthcare 4(1), pp. 1–29. Published 2021-02-11. DOI: 10.1145/3587695.
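A crude proxy for the idea, invented here purely for illustration and much weaker than the paper's causal-structure metric, is to prefer candidate effect models whose predicted effects stay stable when evaluated on shifted target covariates:

```python
import numpy as np

def invariance_score(model, x_source, x_target):
    """Toy selection score: penalise models whose predicted treatment-effect
    distribution shifts between source and target covariates. (The paper's
    metric checks invariant causal structures; this only sketches the idea.)"""
    tau_s, tau_t = model(x_source), model(x_target)
    return -abs(tau_s.mean() - tau_t.mean())

rng = np.random.default_rng(1)
x_s = rng.normal(0.0, 1.0, 500)   # source-domain covariates
x_t = rng.normal(2.0, 1.0, 500)   # covariate-shifted target domain

invariant_model = lambda x: np.full_like(x, 1.5)   # effect truly constant
spurious_model = lambda x: 1.5 + 0.8 * x           # effect leaks the shift

best = max([invariant_model, spurious_model],
           key=lambda m: invariance_score(m, x_s, x_t))
```

Under covariate shift, the model whose predicted effect depends on the shifted covariate scores poorly, so the selection favours the invariant one, mirroring at toy scale the robustness the abstract reports.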
Vital sign (e.g., respiration rate) monitoring has become increasingly important because it offers useful clues about medical conditions such as sleep disorders. There is a compelling need for technologies that enable contact-free and easy deployment of vital sign monitoring over an extended period of time for healthcare. In this article, we present SonarBeat, a system that leverages phase-based active sonar to monitor respiration rates with smartphones. We provide a sonar phase analysis and discuss the technical challenges for respiration rate estimation utilizing an inaudible sound signal. Moreover, we design and implement the SonarBeat system, with components including signal generation, data extraction, received signal preprocessing, and breathing rate estimation with Android smartphones. Our extensive experimental results validate the superior performance of SonarBeat in different indoor environment settings.
Smartphone Sonar-Based Contact-Free Respiration Rate Monitoring. Xuyu Wang, Runze Huang, Chao Yang, S. Mao. ACM Transactions on Computing for Healthcare 2(1), pp. 1–26. Published 2021-02-09. DOI: 10.1145/3436822.
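The final estimation step, finding the dominant breathing-band frequency in the demodulated phase signal, can be sketched on simulated data. The band limits, sampling rate, and signal model below are illustrative assumptions, not SonarBeat's actual parameters.

```python
import numpy as np

def respiration_rate_bpm(phase, fs):
    """Estimate breathing rate from a phase signal by locating the dominant
    spectral peak in an assumed 0.1-0.5 Hz breathing band (6-30 breaths/min)."""
    x = phase - phase.mean()
    freqs = np.fft.rfftfreq(len(x), d=1.0 / fs)
    mag = np.abs(np.fft.rfft(x))
    band = (freqs >= 0.1) & (freqs <= 0.5)
    f_peak = freqs[band][np.argmax(mag[band])]
    return 60.0 * f_peak

# Simulate 60 s of phase modulated by breathing at 0.25 Hz (15 breaths/min),
# plus a little measurement noise.
fs = 20.0
t = np.arange(0, 60, 1.0 / fs)
phase = (0.6 * np.sin(2 * np.pi * 0.25 * t)
         + 0.05 * np.random.default_rng(2).normal(size=t.size))
```

The real system must first recover this phase from the reflected inaudible chirps and suppress motion artifacts, which is where the paper's preprocessing pipeline comes in.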
Stein Kristiansen, K. Nikolaidis, T. Plagemann, V. Goebel, G. Traaen, B. Øverland, L. Aakerøy, T. Hunt, J. P. Loennechen, S. Steinshamn, C. Bendz, O. Anfinsen, L. Gullestad, H. Akre
Sleep apnea is a common and strongly under-diagnosed severe sleep-related respiratory disorder with periods of disrupted or reduced breathing during sleep. To diagnose sleep apnea, sleep data are collected with either polysomnography or polygraphy and scored by a sleep expert. We investigate in this work the use of supervised machine learning to automate the analysis of polygraphy data from the A3 study containing more than 7,400 hours of sleep monitoring data from 579 patients. We conduct a systematic comparative study of classification performance and resource use with different combinations of 27 classifiers and four sleep signals. The classifiers achieve up to 0.8941 accuracy (kappa: 0.7877) when using all four signal types simultaneously and up to 0.8543 accuracy (kappa: 0.7080) with only one signal, i.e., oxygen saturation. Methods based on deep learning outperform other methods by a large margin. All deep learning methods achieve nearly the same maximum classification performance even when they have very different architectures and sizes. When jointly accounting for classification performance, resource consumption, and the ability to achieve high classification performance with less training data, we find that convolutional neural networks substantially outperform the other classifiers.
Machine Learning for Sleep Apnea Detection with Unattended Sleep Monitoring at Home. ACM Transactions on Computing for Healthcare 2(1), pp. 1–25. Published 2021-02-09. DOI: 10.1145/3433987.
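Cohen's kappa, reported alongside raw accuracy above, corrects agreement for what would be expected by chance given the label distributions. A minimal implementation:

```python
def cohens_kappa(y_true, y_pred):
    """Cohen's kappa: (observed agreement - chance agreement) / (1 - chance)."""
    labels = sorted(set(y_true) | set(y_pred))
    n = len(y_true)
    po = sum(t == p for t, p in zip(y_true, y_pred)) / n
    pe = sum(
        (sum(t == c for t in y_true) / n) * (sum(p == c for p in y_pred) / n)
        for c in labels
    )
    return (po - pe) / (1 - pe)
```

This is why a degenerate classifier that always predicts "no apnea" can score a high accuracy on imbalanced sleep data yet a kappa near zero, making kappa the more honest of the two figures here.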
Wearable health-tracking consumer products are gaining popularity, including smart watches, fitness trackers, smart clothing, and head-mounted devices. These wearable devices promise new opportunities for the study of health-related behavior, for tracking of chronic conditions, and for innovative interventions in support of health and wellness. Next-generation wearable technologies have the potential to transform today’s hospital-centered healthcare practices into proactive, individualized care. Although it seems new technologies enter the marketplace every week, there is still a great need for research on the development of sensors, sensor-data analytics, wearable interaction modalities, and more. In this special issue, we sought to assemble a set of articles addressing novel computational research related to any aspect of the design or use of wearables in medicine and health, including wearable hardware design, AI and data analytics algorithms, human-device interaction, security/privacy, and novel applications. Here, in Part 2 of a two-part collection of articles on this topic, we are pleased to share four articles about the use of wearables for skill assessment, activity recognition, mood recognition, and deep learning. In the first article, Generalized and Efficient Skill Assessment from IMU Data with Applications in Gymnastics and Medical Training, Khan et al. propose a new framework for skill assessment that generalizes across application domains and can be deployed for different near-real-time applications. The effectiveness and efficiency of the proposed approach is validated in gymnastics and surgical skill training of medical students. In the next article, Privacy-preserving IoT Framework for Activity Recognition in Personal Healthcare Monitoring, Jourdan et al. 
propose a framework that uses machine learning to recognize the user activity, in the context of personal healthcare monitoring, while limiting the risk of users’ re-identification from biometric patterns that characterize an individual. Their solution trades a slight decrease in utility (a 9% drop in accuracy) for a large increase in privacy. Next, the article Perception Clusters: Automated Mood Recognition using a Novel Cluster-driven Modelling System proposes a mood-recognition system that groups individuals into “perception clusters” based on their physiological signals. This method can provide inference results that are more accurate than generalized models, without the need for the extensive training data necessary to build personalized models. In this regard, the approach is a compromise between generalized and personalized models for automated mood recognition (AMR). Finally, in the article Ensemble Deep Learning on Wearables Using Small Datasets, Ngu et al. describe an in-depth experimental study of Ensemble Deep Learning techniques on small time-series datasets generated by wearable devices, which is motivated by the fact that there
{"title":"Introduction to the Special Issue on the Wearable Technologies for Smart Health, Part 2","authors":"D. Kotz, G. Xing","doi":"10.1145/3442350","DOIUrl":"https://doi.org/10.1145/3442350","url":null,"abstract":"Wearable health-tracking consumer products are gaining popularity, including smart watches, fitness trackers, smart clothing, and head-mounted devices. These wearable devices promise new opportunities for the study of health-related behavior, for tracking of chronic conditions, and for innovative interventions in support of health and wellness. Next-generation wearable technologies have the potential to transform today’s hospitalcentered healthcare practices into proactive, individualized care. Although it seems new technologies enter the marketplace every week, there is still a great need for research on the development of sensors, sensor-data analytics, wearable interaction modalities, and more. In this special issue, we sought to assemble a set of articles addressing novel computational research related to any aspect of the design or use of wearables in medicine and health, including wearable hardware design, AI and data analytics algorithms, human-device interaction, security/privacy, and novel applications. Here, in Part 2 of a two-part collection of articles on this topic, we are pleased to share four articles about the use of wearables for skill assessment, activity recognition, mood recognition, and deep learning. In the first article, Generalized and Efficient Skill Assessment from IMU Data with Applications in Gymnastics and Medical Training, Khan et al. propose a new framework for skill assessment that generalizes across application domains and can be deployed for different near-real-time applications. The effectiveness and efficiency of the proposed approach is validated in gymnastics and surgical skill training of medical students. 
In the next article, Privacy-preserving IoT Framework for Activity Recognition in Personal Healthcare Monitoring, Jourdan et al. propose a framework that uses machine learning to recognize user activity, in the context of personal healthcare monitoring, while limiting the risk of users’ re-identification from biometric patterns that characterize an individual. Their solution trades a slight decrease in utility (a 9% drop in accuracy) for a large increase in privacy. Next, the article Perception Clusters: Automated Mood Recognition using a Novel Cluster-driven Modelling System proposes a mood-recognition system that groups individuals into “perception clusters” based on their physiological signals. This method can provide inference results that are more accurate than generalized models, without the need for the extensive training data necessary to build personalized models. In this regard, the approach is a compromise between generalized and personalized models for automated mood recognition (AMR). Finally, in the article Ensemble Deep Learning on Wearables Using Small Datasets, Ngu et al. describe an in-depth experimental study of Ensemble Deep Learning techniques on small time-series datasets generated by wearable devices, which is motivated by the fact that there","PeriodicalId":72043,"journal":{"name":"ACM transactions on computing for healthcare","volume":"2 1","pages":"1 - 2"},"PeriodicalIF":0.0,"publicationDate":"2021-01-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1145/3442350","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"46137824","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Wearable systems have unlocked new sensing paradigms in applications such as human activity recognition, which can enhance the effectiveness of mobile health applications. Current wearable-based systems are not capable of understanding their surroundings, which limits their sensing capabilities. For instance, distinguishing activities such as attending a meeting or a class, which have similar motion patterns but occur in different contexts, is challenging using wearable motion sensors alone. This article focuses on understanding a user's surroundings, i.e., the environmental context, to enhance the capability of wearables, with a focus on detecting complex activities of daily living (ADL). We develop a methodology that automatically detects the context using passively observable information broadcast by devices in the user's locale. This system requires no specific infrastructure or additional hardware. We develop a pattern extraction algorithm and a probabilistic mapping between contexts and activities to reduce the set of probable outcomes. The proposed system contains a general ADL classifier that works with motion sensors, learns personalized contexts, and uses them to reduce the search space of activities to those that occur within a given context. We collected real-world data on complex ADLs; by narrowing the search space with context, we improve the average F1-score from 0.72 to 0.80.
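The core idea above — detect a context from passively observed device broadcasts, then restrict a general ADL classifier to activities plausible in that context — can be sketched as follows. This is a minimal illustration, not the authors' implementation: the context signatures, probability table, scores, and threshold are all hypothetical.

```python
# Hypothetical learned mapping P(activity | context); values are illustrative.
CONTEXT_ACTIVITY_PROB = {
    "office":  {"typing": 0.50, "meeting": 0.40, "cooking": 0.00, "eating": 0.10},
    "kitchen": {"typing": 0.05, "meeting": 0.00, "cooking": 0.60, "eating": 0.35},
}

def detect_context(nearby_ids, context_signatures):
    """Pick the context whose learned set of broadcast device identifiers
    overlaps most with the identifiers passively observed right now."""
    best, best_overlap = None, -1
    for ctx, signature in context_signatures.items():
        overlap = len(signature & nearby_ids)
        if overlap > best_overlap:
            best, best_overlap = ctx, overlap
    return best

def classify_with_context(motion_scores, context, prob_map, min_prob=0.05):
    """Restrict the general motion-based ADL classifier's candidates to
    activities plausible in the detected context, then take the argmax."""
    candidates = {a: s for a, s in motion_scores.items()
                  if prob_map[context].get(a, 0.0) >= min_prob}
    if not candidates:  # fall back to the unrestricted classifier
        candidates = motion_scores
    return max(candidates, key=candidates.get)

# Hypothetical context signatures learned from previously seen broadcasts.
context_signatures = {
    "office":  {"printer-ble", "projector-ble"},
    "kitchen": {"smart-fridge-ble"},
}
nearby = {"printer-ble", "phone-123"}       # passively observed identifiers
ctx = detect_context(nearby, context_signatures)

# "meeting" and "cooking" get near-identical motion scores, but the detected
# context rules cooking out of the search space.
scores = {"typing": 0.20, "meeting": 0.38, "cooking": 0.37, "eating": 0.05}
print(ctx, classify_with_context(scores, ctx, CONTEXT_ACTIVITY_PROB))
# → office meeting
```

Without the context mask, the classifier would have to separate "meeting" (0.38) from "cooking" (0.37) on motion alone; the mapping prunes implausible activities before the argmax, which is the mechanism behind the reported F1 gain.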
{"title":"Data-driven Context Detection Leveraging Passively Sensed Nearables for Recognizing Complex Activities of Daily Living","authors":"A. Akbari, Reese Grimsley, R. Jafari","doi":"10.1145/3428664","DOIUrl":"https://doi.org/10.1145/3428664","url":null,"abstract":"Wearable systems have unlocked new sensing paradigms in various applications such as human activity recognition, which can enhance effectiveness of mobile health applications. Current systems using wearables are not capable of understanding their surroundings, which limits their sensing capabilities. For instance, distinguishing certain activities such as attending a meeting or class, which have similar motion patterns but happen in different contexts, is challenging by merely using wearable motion sensors. This article focuses on understanding user's surroundings, i.e., environmental context, to enhance capability of wearables, with focus on detecting complex activities of daily living (ADL). We develop a methodology to automatically detect the context using passively observable information broadcasted by devices in users’ locale. This system does not require specific infrastructure or additional hardware. We develop a pattern extraction algorithm and probabilistic mapping between the context and activities to reduce the set of probable outcomes. The proposed system contains a general ADL classifier working with motion sensors, learns personalized context, and uses that to reduce the search space of activities to those that occur within a certain context. 
We collected real-world data of complex ADLs and by narrowing the search space with context, we improve average F1-score from 0.72 to 0.80.","PeriodicalId":72043,"journal":{"name":"ACM transactions on computing for healthcare","volume":"2 1","pages":"1 - 22"},"PeriodicalIF":0.0,"publicationDate":"2021-01-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1145/3428664","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"45105424","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}