
Latest Articles: Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies

A Context-Assisted, Semi-Automated Activity Recall Interface Allowing Uncertainty.
IF 4.5 · Q2 (COMPUTER SCIENCE, INFORMATION SYSTEMS) · Pub Date: 2025-12-01 · Epub Date: 2025-12-02 · DOI: 10.1145/3770710
H A LE, Veronika Potter, Akshat Choube, Rithika Lakshminarayanan, Varun Mishra, Stephen Intille

Measuring activities and postures is an important area of research in ubiquitous computing, human-computer interaction, and personal health informatics. One approach that researchers use to collect large amounts of labeled data to develop models for activity recognition and measurement is asking participants to self-report their daily activities. Although participants can typically recall their sequence of daily activities, remembering the precise start and end times of each activity is significantly more challenging. ACAI is a novel, context-assisted ACtivity Annotation Interface that enables participants to efficiently label their activities by accepting or adjusting system-generated activity suggestions while explicitly expressing uncertainty about temporal boundaries. We evaluated ACAI using two complementary studies: a usability study with 11 participants and a two-week, free-living study with 14 participants. We compared our activity annotation system with the current gold-standard methods for activity recall in health sciences research: 24PAR and its computerized version, ACT24. Our system reduced annotation time and perceived effort while significantly improving data validity and fidelity compared to both standard human-supervised and unsupervised activity recall approaches. We discuss the limitations of our design and implications for developing adaptive, human-in-the-loop activity recognition systems used to collect self-report data on activity.
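
The abstract does not specify ACAI's internal data model; the minimal Python sketch below only illustrates how an activity label with participant-reported uncertainty about its temporal boundaries might be represented. All names here (ActivityAnnotation, earliest_start, latest_end) are hypothetical, not taken from the paper.

    from dataclasses import dataclass
    from datetime import datetime, timedelta

    @dataclass
    class ActivityAnnotation:
        """Hypothetical record for one confirmed activity label, with the
        participant's uncertainty expressed as slack around each boundary."""
        label: str                    # e.g., "walking", "sitting"
        start: datetime               # best-guess start time
        end: datetime                 # best-guess end time
        start_uncertainty: timedelta  # reported slack on the start boundary
        end_uncertainty: timedelta    # reported slack on the end boundary
        accepted_suggestion: bool     # True if the system suggestion was accepted as-is

        def earliest_start(self) -> datetime:
            return self.start - self.start_uncertainty

        def latest_end(self) -> datetime:
            return self.end + self.end_uncertainty

    # Example: a suggested "walking" bout is accepted, but the participant is
    # only sure of the start time to within about 10 minutes.
    a = ActivityAnnotation("walking",
                           start=datetime(2025, 1, 6, 9, 0),
                           end=datetime(2025, 1, 6, 9, 40),
                           start_uncertainty=timedelta(minutes=10),
                           end_uncertainty=timedelta(minutes=2),
                           accepted_suggestion=True)
    print(a.earliest_start(), a.latest_end())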

Citations: 0
Multimodal Sensing and Modeling of Endocrine Therapy Adherence in Breast Cancer Survivors.
IF 4.5 · Q2 (COMPUTER SCIENCE, INFORMATION SYSTEMS) · Pub Date: 2025-12-01 · Epub Date: 2025-12-02 · DOI: 10.1145/3770864
Fangxu Yuan, Navreet Kaur, Zhiyuan Wang, Manuel Gonzales, Cristian Garcia Alcaraz, Gabriel Estrella, Kristen J Wells, Laura E Barnes

Many breast cancer survivors are prescribed daily oral medications, known as endocrine therapy, to prevent cancer recurrence. Despite its clinical importance, maintaining consistent daily adherence remains challenging due to the dynamic and interrelated influences of behavioral, physiological, and psychological factors. While prior studies have explored adherence prediction using mobile sensing, they often rely on single-modality data, limited temporal granularity, or aggregate-level modeling, limiting their ability to capture short- and long-term behavioral variability and to support a deeper understanding of non-adherence and tailored interventions. To address these gaps, we propose a multimodal sensing framework that explicitly models daily adherence dynamics using temporally adaptive inputs. We recruited a sample of breast cancer survivors (N = 20) and collected longitudinal data streams including wearable-derived physiological features (Fitbit), medication event monitoring system (MEMS) data, and ecological momentary assessments (EMAs). Using multimodal data across varying time windows, we examined whether recent patterns in behavioral, physiological, psychological, and environmental factors improve the prediction of next-day endocrine therapy adherence. Our results demonstrate the feasibility of using multimodal sensing data to predict daily adherence with moderate accuracy. Moreover, models integrating multimodal data consistently outperformed those relying on a single modality. Importantly, we observed that the predictive value of each modality varied depending on the temporal proximity of the input signals, underscoring the importance of modeling immediate and longer-term behavioral patterns. The findings offer valuable insights for advancing adherence monitoring systems, suggesting that incorporating personalized and temporally adaptive data fusion strategies may significantly enhance the effectiveness of intervention design and delivery.
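
The paper's features and models are not reproduced here; as a rough Python illustration of "temporally adaptive inputs", the sketch below averages synthetic daily features over a short and a long trailing window, concatenates the two views, and fits a next-day adherence classifier. The feature count, window lengths, classifier choice, and data are all assumptions (random stand-ins), assuming scikit-learn is available.

    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(0)

    # Synthetic stand-ins for daily streams (e.g., wearable physiology, MEMS
    # cap openings, EMA responses); real definitions come from the study.
    n_days = 200
    daily = rng.normal(size=(n_days, 3))               # 3 hypothetical daily features
    adherent = (rng.random(n_days) > 0.3).astype(int)  # 1 = dose taken that day

    def trailing_mean(X, window):
        # Average each feature over the trailing `window` days: short windows
        # capture recent state, long windows capture longer-term habits.
        return np.array([X[max(0, t - window + 1):t + 1].mean(axis=0)
                         for t in range(len(X))])

    # Fuse a 3-day and a 14-day view of the same streams by concatenation.
    X = np.hstack([trailing_mean(daily, 3), trailing_mean(daily, 14)])[:-1]
    y = adherent[1:]                                   # predict next-day adherence

    clf = LogisticRegression(max_iter=1000)
    print(cross_val_score(clf, X, y, cv=5).mean())     # near the base rate on random data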

Citations: 0
Longitudinal User Engagement with Microinteraction Ecological Momentary Assessment (μEMA).
IF 4.5 · Q2 (COMPUTER SCIENCE, INFORMATION SYSTEMS) · Pub Date: 2025-09-01 · Epub Date: 2025-09-03 · DOI: 10.1145/3749541
Aditya Ponnada, Shirlene D Wang, Jixin Li, Wei-Lin Wang, Genevieve F Dunton, Donald Hedeker, Stephen S Intille

Microinteraction ecological momentary assessment (μEMA) is a type of EMA that uses single-question prompts on a smartwatch to collect real-world self-reports. Smaller-scale studies show that μEMA yields higher response rates than EMA for up to 4 weeks. In this paper, we evaluated μEMA's longitudinal engagement in a 12-month study. Each participant completed EMA surveys (one smartphone prompt/hour for 96 days in 4-day bursts) and μEMA surveys (four smartwatch prompts/hour for 270 days). Using data from 177 participants (1.37 million μEMA and 14.9K EMA surveys), we compared engagement across three groups: those who completed 12 months of EMA data collection (Completed), those who voluntarily withdrew after six months of EMA data collection (Withdrew), and those unenrolled by staff after six months of poor EMA response rates (Unenrolled). Compared to EMA prompts, unenrolled participants were 2.25 times, withdrawn participants 1.65 times, and completed participants 1.53 times more likely to answer μEMA prompts (p < 0.001). Regardless of response rates, μEMA was perceived as less burdensome than EMA (p < 0.001). These results suggest μEMA is a viable method for intensive longitudinal data collection, particularly for participants who find EMA unsustainable.
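
The likelihood ratios above come from models fit to the full prompt-level dataset; as a back-of-the-envelope Python illustration of what a "2.25 times more likely to answer" comparison means, the sketch below computes a simple odds ratio from invented prompt counts (the counts are not the study's data; they are chosen only to make the arithmetic concrete).

    # Odds of answering a μEMA prompt vs. odds of answering an EMA prompt.
    def odds_ratio(answered_a, missed_a, answered_b, missed_b):
        return (answered_a / missed_a) / (answered_b / missed_b)

    uema_answered, uema_missed = 9000, 3000   # odds = 3.0
    ema_answered, ema_missed = 400, 300       # odds ≈ 1.33
    print(round(odds_ratio(uema_answered, uema_missed,
                           ema_answered, ema_missed), 2))   # 2.25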

Citations: 0
NavGraph: Enhancing Blind Travelers' Navigation Experience and Safety.
IF 4.5 · Q2 (COMPUTER SCIENCE, INFORMATION SYSTEMS) · Pub Date: 2025-09-01 · Epub Date: 2025-09-03 · DOI: 10.1145/3749537
Sergio Mascetti, Dragan Ahmetovic, Gabriele Galimberti, James M Coughlan

Independent navigation remains a significant challenge for blind and low vision individuals, especially in unfamiliar environments. In this paper, we introduce the Parsimonious Instructions design principle, which aims to enhance navigation safety while minimizing the number of instructions delivered to the user. We demonstrate the application of this principle through NavGraph, a navigation application adopting a modular architecture comprising four components: localization, routing, guidance, and user interface. NavGraph is designed to provide effective, non-intrusive navigation assistance by optimizing route computation and instruction delivery. We evaluated NavGraph in a user study with 10 blind participants, comparing it to a baseline solution. Results show that NavGraph significantly reduces the number of instructions and improves clarity and safety, without compromising navigation time. These findings support the potential of the Parsimonious Instructions design principle in assistive navigation technologies.
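
The abstract does not detail how NavGraph decides when to speak; the Python sketch below is only a toy illustration of the Parsimonious Instructions idea (stay silent unless the traveler is off route or approaching the next decision point). The route representation, threshold, and function names are hypothetical, not the paper's implementation.

    from dataclasses import dataclass

    @dataclass
    class RouteStep:
        instruction: str   # e.g., "turn left at the corridor junction"
        distance_m: float  # distance from the previous decision point

    def parsimonious_guidance(steps, position_m, off_route, announce_within_m=10.0):
        # Emit an instruction only when one is actually needed; otherwise stay
        # silent to minimize interruptions.
        if off_route:
            return "You have left the route; stop and reorient."
        cumulative = 0.0
        for step in steps:
            cumulative += step.distance_m
            if position_m < cumulative:
                if cumulative - position_m <= announce_within_m:
                    return step.instruction
                return None   # far from the next decision point: say nothing
        return "You have arrived."

    route = [RouteStep("turn left at the junction", 40.0),
             RouteStep("destination door on your right", 25.0)]
    print(parsimonious_guidance(route, position_m=33.0, off_route=False))  # near a turn -> instruction
    print(parsimonious_guidance(route, position_m=10.0, off_route=False))  # far from it -> None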

Citations: 0
Ask Less, Learn More: Adapting Ecological Momentary Assessment Survey Length by Modeling Question-Answer Information Gain.
IF 3.6 · Q2 (COMPUTER SCIENCE, INFORMATION SYSTEMS) · Pub Date: 2024-11-01 · Epub Date: 2024-11-21 · DOI: 10.1145/3699735
Jixin Li, Aditya Ponnada, Wei-Lin Wang, Genevieve F Dunton, Stephen S Intille

Ecological momentary assessment (EMA) is an approach to collect self-reported data repeatedly on mobile devices in natural settings. EMAs allow for temporally dense, ecologically valid data collection, but frequent interruptions with lengthy surveys on mobile devices can burden users, impacting compliance and data quality. We propose a method that reduces the length of each EMA question set measuring interrelated constructs, with only modest information loss. By estimating the potential information gain of each EMA question using question-answer prediction models, this method can prioritize the presentation of the most informative question in a question-by-question sequence and skip uninformative questions. We evaluated the proposed method by simulating question omission using four real-world datasets from three different EMA studies. When compared against the random question omission approach that skips 50% of the questions, our method reduces imputation errors by 15%-52%. In surveys with five answer options for each question, our method can reduce the mean survey length by 34%-56% with a real-time prediction accuracy of 72%-95% for the skipped questions. The proposed method may either allow more constructs to be surveyed without adding user burden or reduce response burden for more sustainable longitudinal EMA data collection.
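
A minimal Python sketch of the general idea of information-gain-based question prioritization: ask the question whose predicted answer distribution is most uncertain, and skip questions the model can already impute confidently. The construct names, distributions, and skip threshold below are illustrative assumptions; the paper's question-answer prediction models are more involved than this.

    import math

    def entropy_bits(probs):
        # Shannon entropy (bits) of a predicted answer distribution.
        return -sum(p * math.log2(p) for p in probs if p > 0)

    def next_question(predicted, asked, skip_below_bits=0.5):
        # Choose the unanswered question with the most uncertain prediction;
        # return None when every remaining question can be imputed confidently.
        candidates = {q: entropy_bits(p) for q, p in predicted.items() if q not in asked}
        if not candidates:
            return None
        q, h = max(candidates.items(), key=lambda kv: kv[1])
        return q if h >= skip_below_bits else None

    # Five answer options per question: the model is unsure about "stress" but
    # already predicts "location" with high confidence.
    predicted = {
        "stress":   [0.25, 0.25, 0.20, 0.20, 0.10],
        "location": [0.97, 0.01, 0.01, 0.005, 0.005],
    }
    print(next_question(predicted, asked=set()))        # "stress"
    print(next_question(predicted, asked={"stress"}))   # None -> skip, impute instead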

Citations: 0
Beyond Detection: Towards Actionable Sensing Research in Clinical Mental Healthcare.
IF 3.6 · Q2 (COMPUTER SCIENCE, INFORMATION SYSTEMS) · Pub Date: 2024-11-01 · Epub Date: 2024-11-21 · DOI: 10.1145/3699755
Daniel A Adler, Yuewen Yang, Thalia Viranda, Xuhai Xu, David C Mohr, Anna R VAN Meter, Julia C Tartaglia, Nicholas C Jacobson, Fei Wang, Deborah Estrin, Tanzeem Choudhury

Researchers in ubiquitous computing have long promised that passive sensing will revolutionize mental health measurement by detecting individuals in a population experiencing a mental health disorder or specific symptoms. Recent work suggests that detection tools do not generalize well when trained and tested in more heterogeneous samples. In this work, we contribute a narrative review and findings from two studies with 41 mental health clinicians to understand these generalization challenges. Our findings motivate research on actionable sensing, as an alternative to detection research, studying how passive sensing can augment traditional mental health measures to support actions in clinical care. Specifically, we identify how passive sensing can support clinical actions by revealing patients' presenting problems for treatment and identifying targets for behavior change and symptom reduction, but passive data requires additional contextual information to be appropriately interpreted and used in care. We conclude by suggesting research at the intersection of actionable sensing and mental healthcare, to align technical research in ubiquitous computing with clinical actions and needs.

Citations: 0
MindScape Study: Integrating LLM and Behavioral Sensing for Personalized AI-Driven Journaling Experiences.
IF 3.6 · Q2 (COMPUTER SCIENCE, INFORMATION SYSTEMS) · Pub Date: 2024-11-01 · Epub Date: 2024-11-21 · DOI: 10.1145/3699761
Subigya Nepal, Arvind Pillai, William Campbell, Talie Massachi, Michael V Heinz, Ashmita Kunwar, Eunsol Soul Choi, Xuhai Xu, Joanna Kuc, Jeremy F Huckins, Jason Holden, Sarah M Preum, Colin Depp, Nicholas Jacobson, Mary P Czerwinski, Eric Granholm, Andrew T Campbell

Mental health concerns are prevalent among college students, highlighting the need for effective interventions that promote self-awareness and holistic well-being. MindScape explores a novel approach to AI-powered journaling by integrating passively collected behavioral patterns such as conversational engagement, sleep, and location with Large Language Models (LLMs). This integration creates a highly personalized and context-aware journaling experience, enhancing self-awareness and well-being by embedding behavioral intelligence into AI. We present an 8-week exploratory study with 20 college students, demonstrating the MindScape app's efficacy in enhancing positive affect (7%), reducing negative affect (11%), loneliness (6%), and anxiety and depression, with a significant week-over-week decrease in PHQ-4 scores (-0.25 coefficient). The study highlights the advantages of contextual AI journaling, with participants particularly appreciating the tailored prompts and insights provided by the MindScape app. Our analysis also includes a comparison of responses to AI-driven contextual versus generic prompts, participant feedback insights, and proposed strategies for leveraging contextual AI journaling to improve well-being on college campuses. By showcasing the potential of contextual AI journaling to support mental health, we provide a foundation for further investigation into the effects of contextual AI journaling on mental health and well-being.
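
MindScape's actual prompt templates and LLM pipeline are not reproduced here; the Python sketch below only illustrates the general pattern of turning passively sensed behavioral context (sleep, conversational engagement, mobility) into a personalized journaling instruction for an LLM. The feature names, thresholds, and wording are invented for the example.

    def build_journaling_prompt(sensed):
        # Translate sensed context into a short instruction for an LLM.
        observations = []
        if sensed.get("sleep_hours", 8.0) < 6.0:
            observations.append("they slept less than usual last night")
        if sensed.get("conversation_minutes", 0.0) < 15.0:
            observations.append("they had few conversations today")
        if sensed.get("places_visited", 0) <= 1:
            observations.append("they mostly stayed in one place")
        context = "; ".join(observations) if observations else "their day looked fairly typical"
        return ("Write one supportive, reflective journaling question (under 25 words) "
                f"for a college student, given that {context}.")

    print(build_journaling_prompt(
        {"sleep_hours": 5.5, "conversation_minutes": 10, "places_visited": 1}))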

Citations: 0
Self-Supervised Representation Learning and Temporal-Spectral Feature Fusion for Bed Occupancy Detection.
IF 4.5 · Q2 (COMPUTER SCIENCE, INFORMATION SYSTEMS) · Pub Date: 2024-09-01 · Epub Date: 2024-09-09 · DOI: 10.1145/3678514
Yingjian Song, Zaid Farooq Pitafi, Fei Dou, Jin Sun, Xiang Zhang, Bradley G Phillips, Wenzhan Song

In automated sleep monitoring systems, bed occupancy detection is the foundation, or first step, before other downstream tasks, such as inferring sleep activities and vital signs. Existing methods rely on threshold-based approaches and are typically developed in single-environment settings, so they do not generalize well to real-world environments. Manually selecting thresholds requires observing a large amount of data and may not yield optimal results. In contrast, acquiring extensive labeled sensory data poses significant challenges regarding cost and time. Hence, developing models capable of generalizing across diverse environments with limited data is imperative. This paper introduces SeismoDot, which consists of a self-supervised learning module and a spectral-temporal feature fusion module for bed occupancy detection. Unlike conventional methods that require separate pre-training and fine-tuning, our self-supervised learning module is co-optimized with the primary target task, which directs learned representations toward a task-relevant embedding space while expanding the feature space. The proposed feature fusion module enables the simultaneous exploitation of temporal and spectral features, enhancing the diversity of information from both domains. By combining these techniques, SeismoDot expands the diversity of the embedding space for both the temporal and spectral domains to enhance its generalizability across different environments. SeismoDot not only achieves high accuracy (98.49%) and F1 scores (98.08%) across 13 diverse environments, but it also maintains high performance (97.01% accuracy and 96.54% F1 score) even when trained with just 20% (4 days) of the total data. This demonstrates its exceptional ability to generalize across various environmental settings, even with limited data availability.
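
SeismoDot's fusion module operates on learned representations; as a plain, non-learned Python illustration of combining temporal and spectral views of a single sensor window, the sketch below concatenates time-domain statistics with FFT band energies. The band edges, sampling rate, and synthetic signal are assumptions, not the paper's configuration.

    import numpy as np

    def temporal_spectral_features(window, fs):
        # Time-domain statistics fused (by concatenation) with spectral band
        # energies for one sensor window.
        window = np.asarray(window, dtype=float)
        temporal = np.array([window.mean(), window.std(), np.ptp(window)])
        spectrum = np.abs(np.fft.rfft(window - window.mean())) ** 2
        freqs = np.fft.rfftfreq(window.size, d=1.0 / fs)
        bands = [(0.5, 4.0), (4.0, 8.0), (8.0, 16.0)]   # example bands in Hz
        spectral = np.array([spectrum[(freqs >= lo) & (freqs < hi)].sum()
                             for lo, hi in bands])
        return np.concatenate([temporal, spectral])

    # 10 s of a synthetic 100 Hz vibration-like signal.
    fs = 100
    t = np.arange(0, 10, 1 / fs)
    signal = np.sin(2 * np.pi * 1.2 * t) + 0.1 * np.random.default_rng(0).normal(size=t.size)
    print(temporal_spectral_features(signal, fs))   # 6 fused features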

Citations: 0
HabitSense: A Privacy-Aware, AI-Enhanced Multimodal Wearable Platform for mHealth Applications.
IF 3.6 · Q2 (COMPUTER SCIENCE, INFORMATION SYSTEMS) · Pub Date: 2024-09-01 · Epub Date: 2024-09-09 · DOI: 10.1145/3678591
Glenn J Fernandes, Jiayi Zheng, Mahdi Pedram, Christopher Romano, Farzad Shahabi, Blaine Rothrock, Thomas Cohen, Helen Zhu, Tanmeet S Butani, Josiah Hester, Aggelos K Katsaggelos, Nabil Alshurafa

Wearable cameras provide an objective method to visually confirm and automate the detection of health-risk behaviors such as smoking and overeating, which is critical for developing and testing adaptive treatment interventions. Despite the potential of wearable camera systems, adoption is hindered by inadequate clinician input in the design, user privacy concerns, and user burden. To address these barriers, we introduced HabitSense, an open-source, multi-modal neck-worn platform developed with input from focus groups with clinicians (N=36) and user feedback from in-the-wild studies involving 105 participants over 35 days. Optimized for monitoring health-risk behaviors, the platform utilizes RGB, thermal, and inertial measurement unit sensors to detect eating and smoking events in real time. In a 7-day study involving 15 participants, HabitSense recorded 768 hours of footage, capturing 420.91 minutes of hand-to-mouth gestures associated with eating and smoking, data crucial for training machine learning models, achieving a 92% F1-score in gesture recognition. To address privacy concerns, the platform records only during likely health-risk behavior events using SECURE, a smart activation algorithm. Additionally, HabitSense employs on-device obfuscation algorithms that selectively obfuscate the background during recording, maintaining individual privacy while leaving gestures related to health-risk behaviors unobfuscated. Our implementation of SECURE has resulted in a 48% reduction in storage needs and a 30% increase in battery life. This paper highlights the critical roles of clinician feedback, extensive field testing, and privacy-enhancing algorithms in developing an unobtrusive, lightweight, and reproducible wearable system that is both feasible and acceptable for monitoring health-risk behaviors in real-world settings.
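
SECURE's actual trigger logic is not described at this level of detail in the abstract; the Python sketch below is only a crude stand-in for the general pattern of gating camera recording on a cheap inertial signal so that footage is captured mainly around likely health-risk behaviors. The motion-energy proxy, threshold, and synthetic IMU data are assumptions.

    import numpy as np

    def should_record(imu_window, threshold=1.8):
        # Crude gate: enable the cameras only when accelerometer motion energy
        # suggests a likely hand-to-mouth gesture is underway.
        magnitude = np.linalg.norm(imu_window, axis=1)   # accel magnitude per sample
        return magnitude.std() > threshold               # motion-energy proxy

    rng = np.random.default_rng(1)
    still = rng.normal(loc=[0.0, 0.0, 9.8], scale=0.05, size=(100, 3))   # device at rest
    gesture = rng.normal(loc=[0.0, 0.0, 9.8], scale=3.0, size=(100, 3))  # vigorous movement
    print(should_record(still), should_record(gesture))                  # False True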

Citations: 0
Collecting Self-reported Physical Activity and Posture Data Using Audio-based Ecological Momentary Assessment.
IF 4.5 · Q2 (COMPUTER SCIENCE, INFORMATION SYSTEMS) · Pub Date: 2024-09-01 · Epub Date: 2024-09-09 · DOI: 10.1145/3678584
H A LE, Rithika Lakshminarayanan, Jixin Li, Varun Mishra, Stephen Intille

μEMA is a data collection method that prompts research participants with quick, answer-at-a-glance, single multiple-choice self-report behavioral questions, thus enabling high-temporal-density self-report of up to four times per hour when implemented on a smartwatch. However, due to the small watch screen, μEMA is better suited to selecting among 2 to 5 multiple-choice answers than to collecting open-ended responses. We introduce an alternative and novel form of micro-interaction self-report using speech input, audio-μEMA, where a short beep or vibration cues participants to verbally report their behavioral states, allowing for open-ended, temporally dense self-reports. We conducted a one-hour usability study followed by a within-subject, 6-day to 21-day free-living feasibility study in which participants self-reported their physical activities and postures once every 2 to 5 minutes. We qualitatively explored the usability of the system and identified factors impacting the response rates of this data collection method. Despite being interrupted 12 to 20 times per hour, participants in the free-living study were highly engaged with the system, with an average response rate of 67.7% for audio-μEMA for up to 14 days. We discuss the factors that impacted feasibility; some implementation, methodological, and participant challenges we observed; and important considerations relevant to deploying audio-μEMA in real-time activity recognition systems.
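
The study's prompting scheduler is described only as "once every 2 to 5 minutes"; the Python sketch below generates cue times under the assumption of uniformly random gaps, which reproduces the reported 12-20 interruptions per hour but is otherwise a guess at the implementation.

    import random

    def cue_times(session_minutes, min_gap=2.0, max_gap=5.0, seed=0):
        # Minutes (from session start) at which to play the short beep or
        # vibration that cues a verbal self-report.
        rng = random.Random(seed)
        t, times = 0.0, []
        while True:
            t += rng.uniform(min_gap, max_gap)
            if t >= session_minutes:
                return times
            times.append(round(t, 1))

    cues = cue_times(60)
    print(len(cues), cues[:5])   # roughly 12-20 cues in an hour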

Citations: 0