Privacy-preserving IoT Framework for Activity Recognition in Personal Healthcare Monitoring
T. Jourdan, A. Boutet, A. Bahi, Carole Frindel. ACM Transactions on Computing for Healthcare 2(1), pp. 1–22. DOI: https://doi.org/10.1145/3416947

Increasingly popular wearable consumer products can play a significant role in the healthcare sector. The recognition of human activities from IoT devices is an important building block in this context. While the analysis of the generated data stream can have many benefits from a health point of view, it can also lead to privacy threats by exposing highly sensitive information. In this article, we propose a framework that relies on machine learning to efficiently recognise the user's activity, which is useful for personal healthcare monitoring, while limiting the risk of user re-identification from the biometric patterns that characterize each individual. To achieve this, we show that features in the temporal domain are useful for discriminating user activity, while features in the frequency domain tend to distinguish user identity. We then design a novel protection mechanism that processes the raw signal on the user's smartphone to select features relevant to activity recognition and to normalise features sensitive to re-identification. These unlinkable features are then transferred to the application server. We extensively evaluate our framework on reference datasets: results show accurate activity recognition (87%) while limiting the re-identification rate (33%). This represents a slight decrease in utility (9%) against a large privacy improvement (53%) compared to state-of-the-art baselines.
Creating and Evaluating Chatbots as Eligibility Assistants for Clinical Trials
C. Chuan, Susan Morgan. ACM Transactions on Computing for Healthcare 2(1), pp. 1–19. DOI: https://doi.org/10.1145/3403575

Clinical trials are important tools to improve knowledge about the effectiveness of new treatments for all diseases, including cancers. However, studies show that fewer than 5% of cancer patients are enrolled in any type of research study or clinical trial. Although there is a wide variety of reasons for the low participation rate, we address this issue by designing a chatbot to help users determine their eligibility via interactive, two-way communication. The chatbot is supported by a user-centered classifier that uses an active deep learning approach to separate complex eligibility criteria into questions that can be easily answered by users and information that requires verification by their doctors. We collected all the available clinical trial eligibility criteria from the National Cancer Institute's website to evaluate the chatbot and the classifier. Experimental results show that the active deep learning classifier outperforms the baseline k-nearest neighbor method. In addition, an in-person experiment was conducted to evaluate the effectiveness of the chatbot. The results indicate that the participants who used the chatbot achieved better understanding about eligibility than those who used only the website. Furthermore, interfaces with chatbots were rated significantly better in terms of perceived usability, interactivity, and dialogue.
Generalized and Efficient Skill Assessment from IMU Data with Applications in Gymnastics and Medical Training
Aftab Khan, Sebastian Mellor, R. King, Balazs Janko, W. Harwin, R. Sherratt, I. Craddock, T. Plötz. ACM Transactions on Computing for Healthcare 2(1), pp. 1–21. DOI: https://doi.org/10.1145/3422168

Human activity recognition is progressing from automatically determining what a person is doing and when, to additionally analyzing the quality of these activities, typically referred to as skill assessment. In this article, we propose a new framework for skill assessment that generalizes across application domains and can be deployed for near-real-time applications. It is based on the notion that the repeatability of activities defines skill. The analysis is based on two successive classification steps that analyze (1) movements or activities and (2) their qualities, that is, the actual skills of the human performing them. The first classifier is trained in either a supervised or unsupervised manner and provides confidence scores, which are then used for assessing skills. We evaluate the proposed method in two scenarios: gymnastics and surgical skill training of medical students. We demonstrate both the overall effectiveness and efficiency of the generalized assessment method, especially compared to previous work.
Mobile and Wearable Sensing Frameworks for mHealth Studies and Applications
Devender Kumar, S. Jeuris, J. Bardram, N. Dragoni. ACM Transactions on Computing for Healthcare 2(1), pp. 1–28. DOI: https://doi.org/10.1145/3422158

With the widespread use of smartphones and wearable health sensors, a plethora of mobile health (mHealth) applications to track well-being, run human behavioral studies, and support clinical trials has emerged in recent years. However, the design, development, and deployment of mHealth applications are challenging in many ways. To address these challenges, several generic mobile sensing frameworks have been researched in the past decade. Such frameworks assist developers and researchers in reducing the complexity, time, and cost required to build and deploy health-sensing applications. The main goal of this article is to provide the reader with an overview of the state of the art in health-focused generic mobile and wearable sensing frameworks. This review gives a detailed analysis of functional and non-functional features of existing frameworks, the health studies they were used in, and the stakeholders they support. We also analyze the frameworks' historical evolution, uptake, and maintenance after the initial release. Based on this analysis, we suggest new features and opportunities for future generic mHealth sensing frameworks.
Designing Visual Markers for Continuous Artificial Intelligence Support
Niels van Berkel, O. Ahmad, D. Stoyanov, L. Lovat, A. Blandford. ACM Transactions on Computing for Healthcare, pp. 1–24. DOI: https://doi.org/10.1145/3422156

Colonoscopy, the visual inspection of the large bowel using an endoscope, offers protection against colorectal cancer by allowing for the detection and removal of pre-cancerous polyps. The literature on polyp detection shows widely varying miss rates among clinicians, with reported averages of 22%–27%. While recent work has considered the use of AI support systems for polyp detection, how to visualise these systems and integrate them into clinical practice remains an open question. In this work, we explore the design of visual markers for use in an AI support system for colonoscopy. Supported by the gastroenterologists on our team, we designed seven unique visual markers and rendered them on real-life patient video footage. Through an online survey targeting relevant clinical staff (N = 36), we evaluated these designs and obtained initial insights into how clinical staff envision AI integrating into their daily work environment. Our results provide concrete recommendations for the future deployment of AI support systems in continuous, adaptive scenarios.
Transfer Learning for Human Activity Recognition Using Representational Analysis of Neural Networks
Sizhe An, Ganapati Bhat, S. Gumussoy, Ümit Y. Ogras. ACM Transactions on Computing for Healthcare 4(1), pp. 1–21. DOI: https://doi.org/10.1145/3563948

Human activity recognition (HAR) research has increased in recent years due to its applications in mobile health monitoring, activity recognition, and patient rehabilitation. The typical approach is to train a HAR classifier offline with known users and then use the same classifier for new users. However, the accuracy for new users can be low with this approach if their activity patterns differ from those in the training data. At the same time, training from scratch for new users is not feasible for mobile applications due to the high computational cost and training time. To address this issue, we propose a HAR transfer learning framework with two components. First, a representational analysis reveals common features that can transfer across users and user-specific features that need to be customized. Using this insight, we transfer the reusable portion of the offline classifier to new users and fine-tune only the rest. Our experiments with five datasets show up to 43% accuracy improvement and 66% training time reduction when compared to the baseline without transfer learning. Furthermore, measurements on the hardware platform reveal that power and energy consumption decrease by 43% and 68%, respectively, while achieving the same or higher accuracy as training from scratch. Our code is released for reproducibility.
Chronic Pain Protective Behavior Detection with Deep Learning
Chongyang Wang, Temitayo A. Olugbade, Akhil Mathur, A. Williams, N. Lane, N. Bianchi-Berthouze. ACM Transactions on Computing for Healthcare 2(1), pp. 1–24. DOI: https://doi.org/10.1145/3449068

In chronic pain rehabilitation, physiotherapists adapt physical activity to patients' performance based on their expression of protective behavior, gradually exposing them to feared but harmless and essential everyday activities. As rehabilitation moves outside the clinic, technology should automatically detect such behavior in order to provide similar support. Previous work has shown the feasibility of automatic protective behavior detection (PBD) within a specific activity. In this article, we investigate the use of deep learning for PBD across activity types, using wearable motion capture and surface electromyography data collected from healthy participants and people with chronic pain. We approach the problem by continuously detecting protective behavior within an activity rather than estimating its overall presence. The best performance reaches a mean F1 score of 0.82 with leave-one-subject-out cross-validation. When protective behavior is modeled per activity type, performance reaches a mean F1 score of 0.77 for bend-down, 0.81 for one-leg-stand, 0.72 for sit-to-stand, 0.83 for stand-to-sit, and 0.67 for reach-forward. This is in excellent agreement with the average expert rating performance, suggesting potential for personalized chronic pain management at home. We analyze various parameters characterizing our approach to understand how the results could generalize to other PBD datasets and to different levels of ground-truth granularity.
Introduction to the Special Issue on Wearable Technologies for Smart Health
D. Kotz, G. Xing. ACM Transactions on Computing for Healthcare 1(1), pp. 1–2. DOI: https://doi.org/10.1145/3423967

Wearable health-tracking consumer products are gaining popularity, including smartwatches, fitness trackers, smart clothing, and head-mounted devices. These wearable devices promise new opportunities for the study of health-related behavior, for tracking of chronic conditions, and for innovative interventions in support of health and wellness. Next-generation wearable technologies have the potential to transform today's hospital-centered healthcare practices into proactive, individualized care. Although it seems new technologies enter the marketplace every week, there is still a great need for research on the development of sensors, sensor-data analytics, wearable interaction modalities, and more. In this special issue, we sought to assemble a set of articles addressing novel computational research related to any aspect of the design or use of wearables in medicine and health, including wearable hardware design, AI and data analytics algorithms, human-device interaction, security/privacy, and novel applications. Here, in Part 1 of a two-part collection of articles on this topic, we are pleased to share seven articles about the use of wearables for emotion sensing, physiotherapy, virtual reality, automated meal detection, a human data model, and a survey of physical-activity tracking.

In the first article, "EmotionSense: An Adaptive Emotion Recognition System Based on Wearable Smart Devices", Wang et al. propose an adaptive emotion recognition system based on smartwatches. The proposed approach first identifies user activities and then employs an adaptive emotion-recognition method that extracts fine-grained features from multi-mode sensory data and characterizes different emotions. This work demonstrates that wearable devices like smartwatches have made it possible to recognize physiological and behavioral patterns of humans in a convenient and non-invasive manner.

In the next article, "Physiotherapy over a Distance: The Use of Wearable Technology for Video Consultations in Hospital Settings", Aggarwal et al. report the findings of a field evaluation of a wearable technology, called SoPhy, for assessing lower-limb movements in video consultations. The results show a number of advantages of wearable systems like SoPhy, including helping physiotherapists identify subtle differences in patients' movements, increasing physiotherapists' diagnostic confidence and guiding more accurate assessment of patients, and enhancing overall clinician-patient communication so that patients better understand the therapy goals. Based on the findings, the article also presents design implications to guide further development of video-consultation systems.

Next, the article "On Shooting Stars: Comparing CAVE and HMD Immersive Virtual Reality Exergaming for Adults with Mixed Ability" presents a study that explores the effects of two different iVR systems, the Cave Automated Virtual Environment (CAVE) and the HTC Vive Head-Mounted Display (HMD), as physiotherapy systems. Using an exergame called Project Star Catcher (PSC), the authors compared the two systems across N=40 impaired and non-impaired users. The results show that the HMD iVR system was considerably more effective in improving physical performance and physiological responses during exercise.
Wearable Physical Activity Tracking Systems for Older Adults—A Systematic Review
Dimitri Vargemidis, Kathrin Gerling, Katta Spiel, Vero Vanden Abeele, Luc Geurts. ACM Transactions on Computing for Healthcare 1(1), pp. 1–37. DOI: https://doi.org/10.1145/3402523

Physical activity (PA) positively impacts the quality of life of older adults, with technology as a promising factor in maintaining motivation. Within Computer Science and Engineering, research inv...
EmotionSense: An Adaptive Emotion Recognition System Based on Wearable Smart Devices
Zhu Wang, Zhiwen Yu, Bobo Zhao, Bin Guo, Chaoxiong Chen, Zhiyong Yu. ACM Transactions on Computing for Healthcare, pp. 1–17. DOI: https://doi.org/10.1145/3384394

With the recent surge of smart wearable devices, it is possible to obtain physiological and behavioral data from human beings in a convenient and non-invasive manner. Based on such data, researchers have developed a variety of systems and applications to recognize and understand human behaviors, including both physical activities (e.g., gestures) and mental states (e.g., emotions). Specifically, it has been shown that different emotions cause different changes in physiological parameters. However, other factors, such as activities, may also impact one's physiological parameters. To accurately recognize emotions, we need to explore not only the physiological data but also the behavioral data. To this end, we propose an adaptive emotion recognition system built on a sensor-enriched wearable smartwatch. First, an activity identification method is developed to distinguish different activity scenes (e.g., sitting, walking, and running) using the accelerometer. Based on the identified activity scene, an adaptive emotion recognition method is proposed that leverages multi-mode sensory data (including blood volume pulse, electrodermal activity, and skin temperature). Specifically, we extract fine-grained features to characterize different emotions. Finally, the adaptive user emotion recognition model is constructed and verified by experiments. An accuracy of 74.3% across 30 participants demonstrates that the proposed system can recognize human emotions effectively.