
ACM transactions on computing for healthcare: Latest Publications

Privacy-preserving IoT Framework for Activity Recognition in Personal Healthcare Monitoring
Pub Date : 2020-12-30 DOI: 10.1145/3416947
T. Jourdan, A. Boutet, A. Bahi, Carole Frindel
The increasing popularity of wearable consumer products can play a significant role in the healthcare sector. The recognition of human activities from IoT data is an important building block in this context. While the analysis of the generated data stream can have many benefits from a health point of view, it can also lead to privacy threats by exposing highly sensitive information. In this article, we propose a framework that relies on machine learning to efficiently recognise user activity, useful for personal healthcare monitoring, while limiting the risk of user re-identification from the biometric patterns that characterize each individual. To achieve this, we show that features in the temporal domain are useful for discriminating user activity, while features in the frequency domain tend to reveal user identity. We then design a novel protection mechanism that processes the raw signal on the user’s smartphone to select relevant features for activity recognition and normalise features sensitive to re-identification. These unlinkable features are then transferred to the application server. We extensively evaluate our framework on reference datasets: results show accurate activity recognition (87%) while limiting the re-identification rate (33%). This represents a slight decrease in utility (9%) against a large privacy improvement (53%) compared to state-of-the-art baselines.
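The split described above, temporal features for activity and frequency features for identity, maps naturally onto a small on-device pipeline. Below is a minimal sketch assuming a single accelerometer window as a NumPy array; the feature set and the z-score normalisation are illustrative stand-ins, not the authors' exact protection mechanism.

```python
import numpy as np

def temporal_features(window):
    """Time-domain features kept for activity recognition (illustrative set)."""
    return np.array([window.mean(), window.std(), np.abs(np.diff(window)).mean()])

def frequency_features(window):
    """Frequency-domain features, shown in the paper to expose user identity."""
    spectrum = np.abs(np.fft.rfft(window))
    return spectrum[:8]  # low-frequency magnitudes as an example

def normalise(features):
    """Blunt re-identification-sensitive features before they leave the device
    (a simple z-score stand-in for the paper's normalisation step)."""
    return (features - features.mean()) / (features.std() + 1e-8)

# One 3-second window of accelerometer magnitude sampled at 50 Hz (synthetic).
window = np.random.randn(150)

# Only activity-relevant temporal features plus normalised frequency features
# would be transferred to the application server.
payload = np.concatenate([temporal_features(window), normalise(frequency_features(window))])
print(payload.shape)
```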
{"title":"Privacy-preserving IoT Framework for Activity Recognition in Personal Healthcare Monitoring","authors":"T. Jourdan, A. Boutet, A. Bahi, Carole Frindel","doi":"10.1145/3416947","DOIUrl":"https://doi.org/10.1145/3416947","url":null,"abstract":"The increasing popularity of wearable consumer products can play a significant role in the healthcare sector. The recognition of human activities from IoT is an important building block in this context. While the analysis of the generated datastream can have many benefits from a health point of view, it can also lead to privacy threats by exposing highly sensitive information. In this article, we propose a framework that relies on machine learning to efficiently recognise the user activity, useful for personal healthcare monitoring, while limiting the risk of users re-identification from biometric patterns characterizing each individual. To achieve that, we show that features in temporal domain are useful to discriminate user activity while features in frequency domain lead to distinguish the user identity. We then design a novel protection mechanism processing the raw signal on the user’s smartphone to select relevant features for activity recognition and normalise features sensitive to re-identification. These unlinkable features are then transferred to the application server. We extensively evaluate our framework with reference datasets: Results show an accurate activity recognition (87%) while limiting the re-identification rate (33%). This represents a slight decrease of utility (9%) against a large privacy improvement (53%) compared to state-of-the-art baselines.","PeriodicalId":72043,"journal":{"name":"ACM transactions on computing for healthcare","volume":"2 1","pages":"1 - 22"},"PeriodicalIF":0.0,"publicationDate":"2020-12-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1145/3416947","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"41965873","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 7
Creating and Evaluating Chatbots as Eligibility Assistants for Clinical Trials
Pub Date : 2020-12-30 DOI: 10.1145/3403575
C. Chuan, Susan Morgan
Clinical trials are important tools to improve knowledge about the effectiveness of new treatments for all diseases, including cancers. However, studies show that fewer than 5% of cancer patients are enrolled in any type of research study or clinical trial. Although there is a wide variety of reasons for the low participation rate, we address this issue by designing a chatbot to help users determine their eligibility via interactive, two-way communication. The chatbot is supported by a user-centered classifier that uses an active deep learning approach to separate complex eligibility criteria into questions that can be easily answered by users and information that requires verification by their doctors. We collected all the available clinical trial eligibility criteria from the National Cancer Institute's website to evaluate the chatbot and the classifier. Experimental results show that the active deep learning classifier outperforms the baseline k-nearest neighbor method. In addition, an in-person experiment was conducted to evaluate the effectiveness of the chatbot. The results indicate that the participants who used the chatbot achieved better understanding about eligibility than those who used only the website. Furthermore, interfaces with chatbots were rated significantly better in terms of perceived usability, interactivity, and dialogue.
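To illustrate the comparison mentioned above, the sketch below contrasts a k-nearest-neighbor baseline with a simple uncertainty-sampling active-learning loop over eligibility-criterion text; the TF-IDF features, the logistic-regression learner (a stand-in for the deep model), and the example criteria and labels are all hypothetical.

```python
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.neighbors import KNeighborsClassifier
from sklearn.linear_model import LogisticRegression

# Hypothetical criteria labelled 1 = user can answer, 0 = needs doctor verification.
criteria = [
    "Age 18 years or older",
    "ECOG performance status of 0 or 1",
    "No prior chemotherapy within 6 months",
    "Adequate renal function",
    "Able to swallow oral medication",
    "Histologically confirmed adenocarcinoma",
]
labels = np.array([1, 0, 1, 0, 1, 0])

X = TfidfVectorizer().fit_transform(criteria)

# Baseline: k-nearest neighbours over the labelled pool.
knn = KNeighborsClassifier(n_neighbors=3).fit(X, labels)

# Active learning: start with a few labels, repeatedly query the least confident criterion.
labelled, unlabelled = [0, 1], [2, 3, 4, 5]
model = LogisticRegression()
for _ in range(3):
    model.fit(X[labelled], labels[labelled])
    probs = model.predict_proba(X[unlabelled])
    query = unlabelled[int(np.argmin(np.max(probs, axis=1)))]  # most uncertain item
    labelled.append(query)      # simulate asking an annotator for its label
    unlabelled.remove(query)

print("kNN:", knn.predict(X[:1]), "active model:", model.predict(X[:1]))
```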
{"title":"Creating and Evaluating Chatbots as Eligibility Assistants for Clinical Trials","authors":"C. Chuan, Susan Morgan","doi":"10.1145/3403575","DOIUrl":"https://doi.org/10.1145/3403575","url":null,"abstract":"Clinical trials are important tools to improve knowledge about the effectiveness of new treatments for all diseases, including cancers. However, studies show that fewer than 5% of cancer patients are enrolled in any type of research study or clinical trial. Although there is a wide variety of reasons for the low participation rate, we address this issue by designing a chatbot to help users determine their eligibility via interactive, two-way communication. The chatbot is supported by a user-centered classifier that uses an active deep learning approach to separate complex eligibility criteria into questions that can be easily answered by users and information that requires verification by their doctors. We collected all the available clinical trial eligibility criteria from the National Cancer Institute's website to evaluate the chatbot and the classifier. Experimental results show that the active deep learning classifier outperforms the baseline k-nearest neighbor method. In addition, an in-person experiment was conducted to evaluate the effectiveness of the chatbot. The results indicate that the participants who used the chatbot achieved better understanding about eligibility than those who used only the website. Furthermore, interfaces with chatbots were rated significantly better in terms of perceived usability, interactivity, and dialogue.","PeriodicalId":72043,"journal":{"name":"ACM transactions on computing for healthcare","volume":"2 1","pages":"1 - 19"},"PeriodicalIF":0.0,"publicationDate":"2020-12-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1145/3403575","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"47567756","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 6
Generalized and Efficient Skill Assessment from IMU Data with Applications in Gymnastics and Medical Training
Pub Date : 2020-12-30 DOI: 10.1145/3422168
Aftab Khan, Sebastian Mellor, R. King, Balazs Janko, W. Harwin, R. Sherratt, I. Craddock, T. Plötz
Human activity recognition is progressing from automatically determining what a person is doing and when, to additionally analyzing the quality of these activities—typically referred to as skill assessment. In this article, we propose a new framework for skill assessment that generalizes across application domains and can be deployed for near-real-time applications. It is based on the notion of repeatability of activities defining skill. The analysis is based on two successive classification steps that analyze (1) movements or activities and (2) their qualities, that is, the actual skills of a human performing them. The first classifier is trained in either a supervised or unsupervised manner and provides confidence scores, which are then used for assessing skills. We evaluate the proposed method in two scenarios: gymnastics and surgical skill training of medical students. We demonstrate both the overall effectiveness and efficiency of the generalized assessment method, especially compared to previous work.
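A minimal sketch of the two-step design: a first classifier recognises the movement, and its per-window confidence scores feed a second classifier that rates skill, following the idea that skilled executions are more repeatable. The random-forest models and synthetic data are placeholders, not the authors' actual pipeline.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

# Synthetic IMU feature windows: 200 windows x 12 features, with activity and skill labels.
X = rng.normal(size=(200, 12))
activity = rng.integers(0, 3, size=200)   # e.g. three gymnastics movements
skill = rng.integers(0, 2, size=200)      # 0 = novice, 1 = skilled

# Stage 1: recognise the movement (could also be trained unsupervised in the paper).
movement_clf = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, activity)
confidence = movement_clf.predict_proba(X)  # per-window confidence scores

# Stage 2: assess skill from the confidence scores, i.e. from how consistently
# and confidently the movements were recognised.
skill_clf = RandomForestClassifier(n_estimators=50, random_state=0).fit(confidence, skill)
print("skill prediction for first window:", skill_clf.predict(confidence[:1]))
```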
{"title":"Generalized and Efficient Skill Assessment from IMU Data with Applications in Gymnastics and Medical Training","authors":"Aftab Khan, Sebastian Mellor, R. King, Balazs Janko, W. Harwin, R. Sherratt, I. Craddock, T. Plötz","doi":"10.1145/3422168","DOIUrl":"https://doi.org/10.1145/3422168","url":null,"abstract":"Human activity recognition is progressing from automatically determining what a person is doing and when, to additionally analyzing the quality of these activities—typically referred to as skill assessment. In this chapter, we propose a new framework for skill assessment that generalizes across application domains and can be deployed for near-real-time applications. It is based on the notion of repeatability of activities defining skill. The analysis is based on two subsequent classification steps that analyze (1) movements or activities and (2) their qualities, that is, the actual skills of a human performing them. The first classifier is trained in either a supervised or unsupervised manner and provides confidence scores, which are then used for assessing skills. We evaluate the proposed method in two scenarios: gymnastics and surgical skill training of medical students. We demonstrate both the overall effectiveness and efficiency of the generalized assessment method, especially compared to previous work.","PeriodicalId":72043,"journal":{"name":"ACM transactions on computing for healthcare","volume":"2 1","pages":"1 - 21"},"PeriodicalIF":0.0,"publicationDate":"2020-12-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1145/3422168","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"46052762","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 5
Mobile and Wearable Sensing Frameworks for mHealth Studies and Applications
Pub Date : 2020-12-30 DOI: 10.1145/3422158
Devender Kumar, S. Jeuris, J. Bardram, N. Dragoni
With the widespread use of smartphones and wearable health sensors, a plethora of mobile health (mHealth) applications to track well-being, run human behavioral studies, and support clinical trials have emerged in recent years. However, the design, development, and deployment of mHealth applications are challenging in many ways. To address these challenges, several generic mobile sensing frameworks have been researched in the past decade. Such frameworks assist developers and researchers in reducing the complexity, time, and cost required to build and deploy health-sensing applications. The main goal of this article is to provide the reader with an overview of the state-of-the-art of health-focused generic mobile and wearable sensing frameworks. This review gives a detailed analysis of functional and non-functional features of existing frameworks, the health studies they were used in, and the stakeholders they support. Additionally, we analyze the historical evolution, uptake, and maintenance after the initial release. Based on this analysis, we suggest new features and opportunities for future generic mHealth sensing frameworks.
{"title":"Mobile and Wearable Sensing Frameworks for mHealth Studies and Applications","authors":"Devender Kumar, S. Jeuris, J. Bardram, N. Dragoni","doi":"10.1145/3422158","DOIUrl":"https://doi.org/10.1145/3422158","url":null,"abstract":"With the widespread use of smartphones and wearable health sensors, a plethora of mobile health (mHealth) applications to track well-being, run human behavioral studies, and clinical trials have emerged in recent years. However, the design, development, and deployment of mHealth applications is challenging in many ways. To address these challenges, several generic mobile sensing frameworks have been researched in the past decade. Such frameworks assist developers and researchers in reducing the complexity, time, and cost required to build and deploy health-sensing applications. The main goal of this article is to provide the reader with an overview of the state-of-the-art of health-focused generic mobile and wearable sensing frameworks. This review gives a detailed analysis of functional and non-functional features of existing frameworks, the health studies they were used in, and the stakeholders they support. Additionally, we also analyze the historical evolution, uptake, and maintenance after the initial release. Based on this analysis, we suggest new features and opportunities for future generic mHealth sensing frameworks.","PeriodicalId":72043,"journal":{"name":"ACM transactions on computing for healthcare","volume":"2 1","pages":"1 - 28"},"PeriodicalIF":0.0,"publicationDate":"2020-12-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1145/3422158","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"47998709","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 19
Designing Visual Markers for Continuous Artificial Intelligence Support
Pub Date : 2020-12-30 DOI: 10.1145/3422156
Niels van Berkel, O. Ahmad, D. Stoyanov, L. Lovat, A. Blandford
Colonoscopy, the visual inspection of the large bowel using an endoscope, offers protection against colorectal cancer by allowing for the detection and removal of pre-cancerous polyps. The literature on polyp detection shows widely varying miss rates among clinicians, with averages ranging from 22% to 27%. While recent work has considered the use of AI support systems for polyp detection, how to visualise and integrate these systems into clinical practice is an open question. In this work, we explore the design of visual markers as used in an AI support system for colonoscopy. Supported by the gastroenterologists in our team, we designed seven unique visual markers and rendered them on real-life patient video footage. Through an online survey targeting relevant clinical staff (N = 36), we evaluated these designs and obtained initial insights into the way in which clinical staff envision AI integrating into their daily work environment. Our results provide concrete recommendations for the future deployment of AI support systems in continuous, adaptive scenarios.
{"title":"Designing Visual Markers for Continuous Artificial Intelligence Support","authors":"Niels van Berkel, O. Ahmad, D. Stoyanov, L. Lovat, A. Blandford","doi":"10.1145/3422156","DOIUrl":"https://doi.org/10.1145/3422156","url":null,"abstract":"Colonoscopy, the visual inspection of the large bowel using an endoscope, offers protection against colorectal cancer by allowing for the detection and removal of pre-cancerous polyps. The literature on polyp detection shows widely varying miss rates among clinicians, with averages ranging around 22%--27%. While recent work has considered the use of AI support systems for polyp detection, how to visualise and integrate these systems into clinical practice is an open question. In this work, we explore the design of visual markers as used in an AI support system for colonoscopy. Supported by the gastroenterologists in our team, we designed seven unique visual markers and rendered them on real-life patient video footage. Through an online survey targeting relevant clinical staff (N = 36), we evaluated these designs and obtained initial insights and understanding into the way in which clinical staff envision AI to integrate in their daily work-environment. Our results provide concrete recommendations for the future deployment of AI support systems in continuous, adaptive scenarios.","PeriodicalId":72043,"journal":{"name":"ACM transactions on computing for healthcare","volume":" ","pages":"1 - 24"},"PeriodicalIF":0.0,"publicationDate":"2020-12-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1145/3422156","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"44225886","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 15
Transfer Learning for Human Activity Recognition Using Representational Analysis of Neural Networks
Pub Date : 2020-12-05 DOI: 10.1145/3563948
Sizhe An, Ganapati Bhat, S. Gumussoy, Ümit Y. Ogras
Research on human activity recognition (HAR) has increased in recent years due to its applications in mobile health monitoring, activity recognition, and patient rehabilitation. The typical approach is training a HAR classifier offline with known users and then using the same classifier for new users. However, the accuracy for new users can be low with this approach if their activity patterns are different from those in the training data. At the same time, training from scratch for new users is not feasible for mobile applications due to the high computational cost and training time. To address this issue, we propose a HAR transfer learning framework with two components. First, a representational analysis reveals common features that can transfer across users and user-specific features that need to be customized. Using this insight, we transfer the reusable portion of the offline classifier to new users and fine-tune only the rest. Our experiments with five datasets show up to 43% accuracy improvement and 66% training time reduction when compared to the baseline without using transfer learning. Furthermore, measurements on the hardware platform reveal that the power and energy consumption decreased by 43% and 68%, respectively, while achieving the same or higher accuracy as training from scratch. Our code is released for reproducibility.
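As a rough illustration of the transfer step, the PyTorch sketch below freezes the portion of a pre-trained HAR network treated as user-generic and fine-tunes only the user-specific head on a new user's data. The network shape, the freezing boundary, and the weight file are illustrative assumptions rather than the authors' exact model.

```python
import torch
import torch.nn as nn

# Toy HAR classifier: a shared feature extractor followed by a user-specific head.
class HARNet(nn.Module):
    def __init__(self, n_features=60, n_classes=6):
        super().__init__()
        self.shared = nn.Sequential(nn.Linear(n_features, 128), nn.ReLU(),
                                    nn.Linear(128, 64), nn.ReLU())
        self.head = nn.Linear(64, n_classes)

    def forward(self, x):
        return self.head(self.shared(x))

model = HARNet()
# model.load_state_dict(torch.load("offline_har.pt"))  # hypothetical pre-trained weights

# Freeze the transferable portion; fine-tune only the head on the new user's data.
for p in model.shared.parameters():
    p.requires_grad = False
optimizer = torch.optim.Adam((p for p in model.parameters() if p.requires_grad), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

# A few windows of the new user's labelled data (synthetic stand-in).
x_new, y_new = torch.randn(32, 60), torch.randint(0, 6, (32,))
for _ in range(10):
    optimizer.zero_grad()
    loss = loss_fn(model(x_new), y_new)
    loss.backward()
    optimizer.step()
print(float(loss))
```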
{"title":"Transfer Learning for Human Activity Recognition Using Representational Analysis of Neural Networks","authors":"Sizhe An, Ganapati Bhat, S. Gumussoy, Ümit Y. Ogras","doi":"10.1145/3563948","DOIUrl":"https://doi.org/10.1145/3563948","url":null,"abstract":"Human activity recognition (HAR) has increased in recent years due to its applications in mobile health monitoring, activity recognition, and patient rehabilitation. The typical approach is training a HAR classifier offline with known users and then using the same classifier for new users. However, the accuracy for new users can be low with this approach if their activity patterns are different than those in the training data. At the same time, training from scratch for new users is not feasible for mobile applications due to the high computational cost and training time. To address this issue, we propose a HAR transfer learning framework with two components. First, a representational analysis reveals common features that can transfer across users and user-specific features that need to be customized. Using this insight, we transfer the reusable portion of the offline classifier to new users and fine-tune only the rest. Our experiments with five datasets show up to 43% accuracy improvement and 66% training time reduction when compared to the baseline without using transfer learning. Furthermore, measurements on the hardware platform reveal that the power and energy consumption decreased by 43% and 68%, respectively, while achieving the same or higher accuracy as training from scratch. Our code is released for reproducibility.1","PeriodicalId":72043,"journal":{"name":"ACM transactions on computing for healthcare","volume":"4 1","pages":"1 - 21"},"PeriodicalIF":0.0,"publicationDate":"2020-12-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"49468295","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 19
Chronic Pain Protective Behavior Detection with Deep Learning
Pub Date : 2020-11-29 DOI: 10.1145/3449068
Chongyang Wang, Temitayo A. Olugbade, Akhil Mathur, A. Williams, N. Lane, N. Bianchi-Berthouze
In chronic pain rehabilitation, physiotherapists adapt physical activity to patients’ performance based on their expression of protective behavior, gradually exposing them to feared but harmless and essential everyday activities. As rehabilitation moves outside the clinic, technology should automatically detect such behavior to provide similar support. Previous works have shown the feasibility of automatic protective behavior detection (PBD) within a specific activity. In this article, we investigate the use of deep learning for PBD across activity types, using wearable motion capture and surface electromyography data collected from healthy participants and people with chronic pain. We approach the problem by continuously detecting protective behavior within an activity rather than estimating its overall presence. The best performance reaches a mean F1 score of 0.82 with leave-one-subject-out cross-validation. When protective behavior is modeled per activity type, performance achieves a mean F1 score of 0.77 for bend-down, 0.81 for one-leg-stand, 0.72 for sit-to-stand, 0.83 for stand-to-sit, and 0.67 for reach-forward. This performance reaches an excellent level of agreement with the average experts’ ratings, suggesting potential for personalized chronic pain management at home. We analyze various parameters characterizing our approach to understand how the results could generalize to other PBD datasets and different levels of ground truth granularity.
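For readers unfamiliar with the evaluation protocol, this is a minimal sketch of leave-one-subject-out cross-validation with a mean F1 score using scikit-learn's LeaveOneGroupOut; the synthetic features and random-forest classifier stand in for the paper's data and deep architecture.

```python
import numpy as np
from sklearn.model_selection import LeaveOneGroupOut
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import f1_score

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 20))            # windowed motion-capture/sEMG features (synthetic)
y = rng.integers(0, 2, size=300)          # 1 = protective behaviour present in the window
subjects = rng.integers(0, 10, size=300)  # subject id per window

scores = []
for train_idx, test_idx in LeaveOneGroupOut().split(X, y, groups=subjects):
    clf = RandomForestClassifier(n_estimators=100, random_state=0)
    clf.fit(X[train_idx], y[train_idx])
    scores.append(f1_score(y[test_idx], clf.predict(X[test_idx])))

print("mean F1 across held-out subjects:", np.mean(scores))
```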
{"title":"Chronic Pain Protective Behavior Detection with Deep Learning","authors":"Chongyang Wang, Temitayo A. Olugbade, Akhil Mathur, A. Williams, N. Lane, N. Bianchi-Berthouze","doi":"10.1145/3449068","DOIUrl":"https://doi.org/10.1145/3449068","url":null,"abstract":"In chronic pain rehabilitation, physiotherapists adapt physical activity to patients’ performance based on their expression of protective behavior, gradually exposing them to feared but harmless and essential everyday activities. As rehabilitation moves outside the clinic, technology should automatically detect such behavior to provide similar support. Previous works have shown the feasibility of automatic protective behavior detection (PBD) within a specific activity. In this article, we investigate the use of deep learning for PBD across activity types, using wearable motion capture and surface electromyography data collected from healthy participants and people with chronic pain. We approach the problem by continuously detecting protective behavior within an activity rather than estimating its overall presence. The best performance reaches mean F1 score of 0.82 with leave-one-subject-out cross validation. When protective behavior is modeled per activity type, performance achieves a mean F1 score of 0.77 for bend-down, 0.81 for one-leg-stand, 0.72 for sit-to-stand, 0.83 for stand-to-sit, and 0.67 for reach-forward. This performance reaches excellent level of agreement with the average experts’ rating performance suggesting potential for personalized chronic pain management at home. We analyze various parameters characterizing our approach to understand how the results could generalize to other PBD datasets and different levels of ground truth granularity.","PeriodicalId":72043,"journal":{"name":"ACM transactions on computing for healthcare","volume":"2 1","pages":"1 - 24"},"PeriodicalIF":0.0,"publicationDate":"2020-11-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1145/3449068","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"43298593","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 8
Introduction to the Special Issue on the Wearable Technologies for Smart Health
Pub Date : 2020-11-23 DOI: 10.1145/3423967
D. Kotz, G. Xing
Wearable health-tracking consumer products are gaining popularity, including smartwatches, fitness trackers, smart clothing, and head-mounted devices. These wearable devices promise new opportunities for the study of health-related behavior, for tracking of chronic conditions, and for innovative interventions in support of health and wellness. Next-generation wearable technologies have the potential to transform today’s hospital-centered healthcare practices into proactive, individualized care. Although it seems new technologies enter the marketplace every week, there is still a great need for research on the development of sensors, sensor-data analytics, wearable interaction modalities, and more. In this special issue, we sought to assemble a set of articles addressing novel computational research related to any aspect of the design or use of wearables in medicine and health, including wearable hardware design, AI and data analytics algorithms, human-device interaction, security/privacy, and novel applications. Here, in Part 1 of a two-part collection of articles on this topic, we are pleased to share seven articles about the use of wearables for emotion sensing, physiotherapy, virtual reality, automated meal detection, a human data model, and a survey of physical-activity tracking. In the first article, “EmotionSense: An Adaptive Emotion Recognition System Based on Wearable Smart Devices”, Wang et al. propose an adaptive emotion recognition system based on smartwatches. The proposed approach first identifies user activities and employs an adaptive emotion-recognition method that extracts fine-grained features from multi-mode sensory data and characterizes different emotions. This work demonstrates that wearable devices like smartwatches have made it possible to recognize physiological and behavioral patterns of humans in a convenient and non-invasive manner. In the next article, “Physiotherapy over a Distance: The Use of Wearable Technology for Video Consultations in Hospital Settings”, Aggarwal et al. report the findings of a field evaluation of a wearable technology, called SoPhy, in assessment of lower-limb movements in video consultations. The results show a number of advantages of the wearable systems like SoPhy, including helping physiotherapists in identifying subtle differences in the patient’s movements, increasing the diagnostic confidence of the physiotherapists and guiding more accurate assessment of the patients, and enhancing the overall clinician-patient communication in better understanding the therapy goals to the patients. Based on the findings, the article also presents design implications to guide further development of the video-consultation systems. Next, the article “On Shooting Stars: Comparing CAVE and HMD Immersive Virtual Reality Exergaming for Adults with Mixed Ability” presents a study that explores the effects of two different iVR systems, the Cave Automated Virtual Environment (CAVE) and HTC Vive Head-Mounted Display (HMD), used as physiotherapy systems. Using an exergame called Project Star Catcher (PSC), the authors conducted a crossover examination with N = 40 impaired and non-impaired users; the results show that the HMD-based iVR system was much more effective in improving physical performance and physiological responses during exercise.
{"title":"Introduction to the Special Issue on the Wearable Technologies for Smart Health","authors":"D. Kotz, G. Xing","doi":"10.1145/3423967","DOIUrl":"https://doi.org/10.1145/3423967","url":null,"abstract":"Wearable health-tracking consumer products are gaining popularity, including smartwatches, fitness trackers, smart clothing, and head-mounted devices. These wearable devices promise new opportunities for the study of health-related behavior, for tracking of chronic conditions, and for innovative interventions in support of health and wellness. Next-generation wearable technologies have the potential to transform today’s hospitalcentered healthcare practices into proactive, individualized care. Although it seems new technologies enter the marketplace every week, there is still a great need for research on the development of sensors, sensor-data analytics, wearable interaction modalities, and more. In this special issue, we sought to assemble a set of articles addressing novel computational research related to any aspect of the design or use of wearables in medicine and health, including wearable hardware design, AI and data analytics algorithms, human-device interaction, security/privacy, and novel applications. Here, in Part 1 of a two-part collection of articles on this topic, we are pleased to share seven articles about the use of wearables for emotion sensing, physiotherapy, virtual reality, automated meal detection, a human data model, and a survey of physical-activity tracking. In the first article, “EmotionSense: An Adaptive Emotion Recognition System Based on Wearable Smart Devices”, Wang et al. propose an adaptive emotion recognition system based on smartwatches. The proposed approach first identifies user activities and employs an adaptive emotion-recognition method that extracts finegrained features from multi-mode sensory data and characterizes different emotions. This work demonstrates that wearable devices like smartwatches have made it possible to recognize physiological and behavioral patterns of humans in a convenient and non-invasive manner. In the next article, “Physiotherapy over a Distance: The Use of Wearable Technology for Video Consultations in Hospital Settings”, Aggarwal et al. report the findings of a field evaluation of a wearable technology, called SoPhy, in assessment of lower-limb movements in video consultations. The results show a number of advantages of the wearable systems like SoPhy, including helping physiotherapists in identifying subtle differences in the patient’s movements, increasing the diagnostic confidence of the physiotherapists and guiding more accurate assessment of the patients, and enhancing the overall clinician-patient communication in better understanding the therapy goals to the patients. Based on the findings, the article also presents design implications to guide further development of the video-consultation systems. 
Next, the article “On Shooting Stars: Comparing CAVE and HMD Immersive Virtual Reality Exergaming for Adults with Mixed Ability”, presents a study that explores the effects of two different iVR systems, the Cave Automated Virtual Environment (CAVE) and HTC Vive Head-Mounted Displ","PeriodicalId":72043,"journal":{"name":"ACM transactions on computing for healthcare","volume":"1 1","pages":"1 - 2"},"PeriodicalIF":0.0,"publicationDate":"2020-11-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1145/3423967","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"47188211","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Wearable Physical Activity Tracking Systems for Older Adults—A Systematic Review
Pub Date : 2020-09-30 DOI: 10.1145/3402523
Dimitri Vargemidis, Kathrin Gerling, Katta Spiel, Vero Vanden Abeele, Luc Geurts
Physical activity (PA) positively impacts the quality of life of older adults, with technology as a promising factor in maintaining motivation. Within Computer Science and Engineering, research inv...
{"title":"Wearable Physical Activity Tracking Systems for Older Adults—A Systematic Review","authors":"VargemidisDimitri, GerlingKathrin, SpielKatta, AbeeleVero Vanden, GeurtsLuc","doi":"10.1145/3402523","DOIUrl":"https://doi.org/10.1145/3402523","url":null,"abstract":"Physical activity (PA) positively impacts the quality of life of older adults, with technology as a promising factor in maintaining motivation. Within Computer Science and Engineering, research inv...","PeriodicalId":72043,"journal":{"name":"ACM transactions on computing for healthcare","volume":"1 1","pages":"1-37"},"PeriodicalIF":0.0,"publicationDate":"2020-09-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1145/3402523","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"64028943","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 11
EmotionSense
Pub Date : 2020-09-30 DOI: 10.1145/3384394
Zhu Wang, Zhiwen Yu, Bobo Zhao, Bin Guo, Chaoxiong Chen, Zhiyong Yu
With the recent surge of smart wearable devices, it is possible to obtain the physiological and behavioral data of human beings in a more convenient and non-invasive manner. Based on such data, researchers have developed a variety of systems or applications to recognize and understand human behaviors, including both physical activities (e.g., gestures) and mental states (e.g., emotions). Specifically, it has been proved that different emotions can cause different changes in physiological parameters. However, other factors, such as activities, may also impact one’s physiological parameters. To accurately recognize emotions, we need not only explore the physiological data but also the behavioral data. To this end, we propose an adaptive emotion recognition system by exploring a sensor-enriched wearable smart watch. First, an activity identification method is developed to distinguish different activity scenes (e.g., sitting, walking, and running) by using the accelerometer sensor. Based on the identified activity scenes, an adaptive emotion recognition method is proposed by leveraging multi-mode sensory data (including blood volume pulse, electrodermal activity, and skin temperature). Specifically, we extract fine-grained features to characterize different emotions. Finally, the adaptive user emotion recognition model is constructed and verified by experiments. An accuracy of 74.3% for 30 participants demonstrates that the proposed system can recognize human emotions effectively.
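A minimal sketch of the adaptive idea: infer the activity scene from accelerometer energy, then route the physiological features to an emotion classifier trained for that scene. The thresholds, feature dimensions, and per-scene models are illustrative assumptions, not the system's actual parameters.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def activity_scene(accel_window):
    """Crude scene detection from accelerometer magnitude variance (illustrative thresholds)."""
    energy = np.var(np.linalg.norm(accel_window, axis=1))
    if energy < 0.05:
        return "sitting"
    return "walking" if energy < 1.0 else "running"

# One emotion classifier per activity scene, trained on BVP/EDA/skin-temperature features.
scene_models = {}
for scene in ("sitting", "walking", "running"):
    X = rng.normal(size=(100, 6))        # physiological features for this scene (synthetic)
    y = rng.integers(0, 3, size=100)     # e.g. 0 = neutral, 1 = happy, 2 = stressed
    scene_models[scene] = LogisticRegression(max_iter=500).fit(X, y)

accel = rng.normal(scale=0.1, size=(150, 3))   # one 3-axis accelerometer window
physio = rng.normal(size=(1, 6))               # matching physiological feature vector
scene = activity_scene(accel)
print(scene, "->", scene_models[scene].predict(physio))
```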
{"title":"EmotionSense","authors":"Zhu Wang, Zhiwen Yu, Bobo Zhao, Bin Guo, Chaoxiong Chen, Zhiyong Yu","doi":"10.1145/3384394","DOIUrl":"https://doi.org/10.1145/3384394","url":null,"abstract":"With the recent surge of smart wearable devices, it is possible to obtain the physiological and behavioral data of human beings in a more convenient and non-invasive manner. Based on such data, researchers have developed a variety of systems or applications to recognize and understand human behaviors, including both physical activities (e.g., gestures) and mental states (e.g., emotions). Specifically, it has been proved that different emotions can cause different changes in physiological parameters. However, other factors, such as activities, may also impact one’s physiological parameters. To accurately recognize emotions, we need not only explore the physiological data but also the behavioral data. To this end, we propose an adaptive emotion recognition system by exploring a sensor-enriched wearable smart watch. First, an activity identification method is developed to distinguish different activity scenes (e.g., sitting, walking, and running) by using the accelerometer sensor. Based on the identified activity scenes, an adaptive emotion recognition method is proposed by leveraging multi-mode sensory data (including blood volume pulse, electrodermal activity, and skin temperature). Specifically, we extract fine-grained features to characterize different emotions. Finally, the adaptive user emotion recognition model is constructed and verified by experiments. An accuracy of 74.3% for 30 participants demonstrates that the proposed system can recognize human emotions effectively.","PeriodicalId":72043,"journal":{"name":"ACM transactions on computing for healthcare","volume":"35 1","pages":"1 - 17"},"PeriodicalIF":0.0,"publicationDate":"2020-09-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"83548182","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 10