
Proceedings of the SIGCHI conference on human factors in computing systems. CHI Conference: Latest Publications

Deploying and Examining Beacon for At-Home Patient Self-Monitoring with Critical Flicker Frequency.
Pub Date: 2025-05-01 | Epub Date: 2025-04-25 | DOI: 10.1145/3706598.3714240
Richard Li, Philip Vutien, Sabrina Omer, Michael Yacoub, George Ioannou, Ravi Karkar, Sean A Munson, James Fogarty

Chronic liver disease can lead to neurological conditions that result in coma or death. Although early detection can allow for intervention, testing is infrequent and unstandardized. Beacon is a device for at-home patient self-measurement of cognitive function via critical flicker frequency, which is the frequency at which a flickering light appears steady to an observer. This paper presents our efforts in iterating on Beacon's hardware and software to enable at-home use, then reports on an at-home deployment with 21 patients taking measurements over 6 weeks. We found that measurements were stable despite being taken at different times and in different environments. Finally, through interviews with 15 patients and 5 hepatologists, we report on participant experiences with Beacon, preferences around how CFF data should be presented, and the role of caregivers in helping patients manage their condition. Informed by our experiences with Beacon, we further discuss design implications for home health devices.
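
The abstract defines critical flicker frequency (CFF) but does not describe Beacon's measurement procedure. Purely as an illustration of how such a perceptual threshold is commonly estimated (not Beacon's actual protocol), the sketch below runs a simple adaptive up/down staircase; the `sees_flicker` response function, starting frequency, and step sizes are all hypothetical.

```python
# Illustrative sketch only: the abstract defines CFF but not Beacon's
# protocol. A common way to estimate a flicker-fusion threshold is an
# adaptive up/down staircase: raise the flicker rate while the observer
# still sees flicker, lower it once the light looks steady, and average
# the last few reversal points as the threshold estimate.

def run_cff_staircase(sees_flicker, start_hz=30.0, step_hz=2.0,
                      min_step_hz=0.5, max_trials=40, reversals_used=6):
    """Estimate CFF in Hz. `sees_flicker(freq_hz) -> bool` stands in for
    the observer's response at a given frequency (hypothetical)."""
    freq, step = start_hz, step_hz
    last_direction, reversals = None, []
    for _ in range(max_trials):
        direction = 1 if sees_flicker(freq) else -1   # go up while flicker is still visible
        if last_direction is not None and direction != last_direction:
            reversals.append(freq)                    # response flipped: record a reversal
            step = max(min_step_hz, step / 2)         # shrink the step for finer resolution
        last_direction = direction
        freq += direction * step
        if len(reversals) >= reversals_used:
            break
    return sum(reversals) / len(reversals) if reversals else freq

# Toy observer whose true threshold is 39 Hz:
print(round(run_cff_staircase(lambda hz: hz < 39.0), 1))  # ~38.8, near the true 39 Hz
```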

{"title":"Deploying and Examining Beacon for At-Home Patient Self-Monitoring with Critical Flicker Frequency.","authors":"Richard Li, Philip Vutien, Sabrina Omer, Michael Yacoub, George Ioannou, Ravi Karkar, Sean A Munson, James Fogarty","doi":"10.1145/3706598.3714240","DOIUrl":"10.1145/3706598.3714240","url":null,"abstract":"<p><p>Chronic liver disease can lead to neurological conditions that result in coma or death. Although early detection can allow for intervention, testing is infrequent and unstandardized. Beacon is a device for at-home patient self-measurement of cognitive function via critical flicker frequency, which is the frequency at which a flickering light appears steady to an observer. This paper presents our efforts in iterating on Beacon's hardware and software to enable at-home use, then reports on an at-home deployment with 21 patients taking measurements over 6 weeks. We found that measurements were stable despite being taken at different times and in different environments. Finally, through interviews with 15 patients and 5 hepatologists, we report on participant experiences with Beacon, preferences around how CFF data should be presented, and the role of caregivers in helping patients manage their condition. Informed by our experiences with Beacon, we further discuss design implications for home health devices.</p>","PeriodicalId":74552,"journal":{"name":"Proceedings of the SIGCHI conference on human factors in computing systems. CHI Conference","volume":"2025 ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2025-05-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12165253/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144303906","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
Feasibility and Utility of Multimodal Micro Ecological Momentary Assessment on a Smartwatch.
Pub Date: 2025-04-01 | Epub Date: 2025-04-25 | DOI: 10.1145/3706598.3714086
Ha Le, Veronika Potter, Rithika Lakshminarayanan, Varun Mishra, Stephen Intille

μEMAs (micro ecological momentary assessments) allow participants to answer a short survey quickly with a tap on a smartwatch screen or a brief speech input. The short interaction time and low cognitive burden enable researchers to collect self-reports at high frequency (once every 5-15 minutes) while maintaining participant engagement. Systems with a single input modality, however, may carry different contextual biases that could affect compliance. We combined two input modalities to create a multimodal-μEMA system, allowing participants to choose between speech or touch input to self-report. To investigate system usability, we conducted a seven-day field study in which we asked 20 participants to label their posture and/or physical activity once every five minutes throughout their waking day. Despite the intensive prompting interval, participants responded to 72.4% of the prompts. We found participants gravitated towards different modalities based on personal preferences and contextual states, highlighting the need to consider these factors when designing context-aware multimodal μEMA systems.

{"title":"Feasibility and Utility of Multimodal Micro Ecological Momentary Assessment on a Smartwatch.","authors":"Ha Le, Veronika Potter, Rithika Lakshminarayanan, Varun Mishra, Stephen Intille","doi":"10.1145/3706598.3714086","DOIUrl":"10.1145/3706598.3714086","url":null,"abstract":"<p><p><i>μ</i>EMAs allow participants to answer a short survey quickly with a tap on a smartwatch screen or a brief speech input. The short interaction time and low cognitive burden enable researchers to collect self-reports at high frequency (once every 5-15 minutes) while maintaining participant engagement. Systems with single input modality, however, may carry different contextual biases that could affect compliance. We combined two input modalities to create a multimodal-<i>μ</i>EMA system, allowing participants to choose between speech or touch input to self-report. To investigate system usability, we conducted a seven-day field study where we asked 20 participants to label their posture and/or physical activity once every five minutes throughout their waking day. Despite the intense prompting interval, participants responded to 72.4% of the prompts. We found participants gravitated towards different modalities based on personal preferences and contextual states, highlighting the need to consider these factors when designing context-aware multimodal <i>μ</i>EMA systems.</p>","PeriodicalId":74552,"journal":{"name":"Proceedings of the SIGCHI conference on human factors in computing systems. CHI Conference","volume":"2025 ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2025-04-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12718675/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145812428","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
Designing Technologies for Value-based Mental Healthcare: Centering Clinicians' Perspectives on Outcomes Data Specification, Collection, and Use.
Pub Date: 2025-04-01 | Epub Date: 2025-04-25 | DOI: 10.1145/3706598.3713481
Daniel A Adler, Yuewen Yang, Thalia Viranda, Anna R Van Meter, Emma Elizabeth McGinty, Tanzeem Choudhury

Health information technologies are transforming how mental healthcare is paid for through value-based care programs, which tie payment to data quantifying care outcomes. But it is unclear what outcomes data these technologies should store, how to engage users in data collection, and how outcomes data can improve care. Given these challenges, we conducted interviews with 30 U.S.-based mental health clinicians to explore the design space of health information technologies that support outcomes data specification, collection, and use in value-based mental healthcare. Our findings center clinicians' perspectives on aligning outcomes data for payment programs and care; opportunities for health technologies and personal devices to improve data collection; and considerations for using outcomes data to hold stakeholders including clinicians, health insurers, and social services financially accountable in value-based mental healthcare. We conclude with implications for future research designing and developing technologies supporting value-based care across stakeholders involved with mental health service delivery.

{"title":"Designing Technologies for Value-based Mental Healthcare: Centering Clinicians' Perspectives on Outcomes Data Specification, Collection, and Use.","authors":"Daniel A Adler, Yuewen Yang, Thalia Viranda, Anna R Van Meter, Emma Elizabeth McGinty, Tanzeem Choudhury","doi":"10.1145/3706598.3713481","DOIUrl":"10.1145/3706598.3713481","url":null,"abstract":"<p><p>Health information technologies are transforming how mental healthcare is paid for through value-based care programs, which tie payment to data quantifying care outcomes. But, it is unclear what outcomes data these technologies should store, how to engage users in data collection, and how outcomes data can improve care. Given these challenges, we conducted interviews with 30 U.S.-based mental health clinicians to explore the design space of health information technologies that support outcomes data specification, collection, and use in value-based mental healthcare. Our findings center clinicians' perspectives on aligning outcomes data for payment programs and care; opportunities for health technologies and personal devices to improve data collection; and considerations for using outcomes data to hold stakeholders including clinicians, health insurers, and social services financially accountable in value-based mental healthcare. We conclude with implications for future research designing and developing technologies supporting value-based care across stakeholders involved with mental health service delivery.</p>","PeriodicalId":74552,"journal":{"name":"Proceedings of the SIGCHI conference on human factors in computing systems. CHI Conference","volume":"2025 ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2025-04-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12218218/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144556119","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
Beyond Visual Perception: Insights from Smartphone Interaction of Visually Impaired Users with Large Multimodal Models.
Pub Date: 2025-04-01 | Epub Date: 2025-04-25 | DOI: 10.1145/3706598.3714210
Jingyi Xie, Rui Yu, H E Zhang, Syed Masum Billah, Sooyeon Lee, John M Carroll

Large multimodal models (LMMs) have enabled new AI-powered applications that help people with visual impairments (PVI) receive natural language descriptions of their surroundings through audible text. We investigated how this emerging paradigm of visual assistance transforms how PVI perform and manage their daily tasks. Moving beyond basic usability assessments, we examined both the capabilities and limitations of LMM-based tools in personal and social contexts, while exploring design implications for their future development. Through interviews with 14 visually impaired users and analysis of image descriptions from both participants and social media using Be My AI (an LMM-based application), we identified two key limitations. First, these systems' context awareness suffers from hallucinations and misinterpretations of social contexts, styles, and human identities. Second, their intent-oriented capabilities often fail to grasp and act on users' intentions. Based on these findings, we propose design strategies for improving both human-AI and AI-AI interactions, contributing to the development of more effective, interactive, and personalized assistive technologies.

{"title":"Beyond Visual Perception: Insights from Smartphone Interaction of Visually Impaired Users with Large Multimodal Models.","authors":"Jingyi Xie, Rui Yu, H E Zhang, Syed Masum Billah, Sooyeon Lee, John M Carroll","doi":"10.1145/3706598.3714210","DOIUrl":"10.1145/3706598.3714210","url":null,"abstract":"<p><p>Large multimodal models (LMMs) have enabled new AI-powered applications that help people with visual impairments (PVI) receive natural language descriptions of their surroundings through audible text. We investigated how this emerging paradigm of visual assistance transforms how PVI perform and manage their daily tasks. Moving beyond basic usability assessments, we examined both the capabilities and limitations of LMM-based tools in personal and social contexts, while exploring design implications for their future development. Through interviews with 14 visually impaired users and analysis of image descriptions from both participants and social media using Be My AI (an LMM-based application), we identified two key limitations. First, these systems' context awareness suffers from hallucinations and misinterpretations of social contexts, styles, and human identities. Second, their intent-oriented capabilities often fail to grasp and act on users' intentions. Based on these findings, we propose design strategies for improving both human-AI and AI-AI interactions, contributing to the development of more effective, interactive, and personalized assistive technologies.</p>","PeriodicalId":74552,"journal":{"name":"Proceedings of the SIGCHI conference on human factors in computing systems. CHI Conference","volume":"25 ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2025-04-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12338113/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144823301","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
VideoA11y: Method and Dataset for Accessible Video Description.
Pub Date: 2025-04-01 | Epub Date: 2025-04-25 | DOI: 10.1145/3706598.3714096
Chaoyu Li, Sid Padmanabhuni, Maryam S Cheema, Hasti Seifi, Pooyan Fazli

Video descriptions are crucial for blind and low vision (BLV) users to access visual content. However, current artificial intelligence models for generating descriptions often fall short due to limitations in the quality of human annotations within training datasets, resulting in descriptions that do not fully meet BLV users' needs. To address this gap, we introduce VideoA11y, an approach that leverages multimodal large language models (MLLMs) and video accessibility guidelines to generate descriptions tailored for BLV individuals. Using this method, we have curated VideoA11y-40K, the largest and most comprehensive dataset of 40,000 videos described for BLV users. Rigorous experiments across 15 video categories, involving 347 sighted participants, 40 BLV participants, and seven professional describers, showed that VideoA11y descriptions outperform novice human annotations and are comparable to trained human annotations in clarity, accuracy, objectivity, descriptiveness, and user satisfaction. We evaluated models on VideoA11y-40K using both standard and custom metrics, demonstrating that MLLMs fine-tuned on this dataset produce high-quality accessible descriptions. Code and dataset are available at https://people-robots.github.io/VideoA11y/.
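
The abstract says VideoA11y pairs MLLMs with video accessibility guidelines but does not detail the prompts. As a hedged sketch of that general recipe (not VideoA11y's actual pipeline, guideline text, or prompts), one might condition the model on an explicit guideline list before asking it to describe sampled frames; `call_mllm` and the guideline wording below are hypothetical placeholders.

```python
# Hedged sketch of the guideline-conditioned recipe the abstract names,
# not VideoA11y's actual pipeline. The guideline wording and `call_mllm`
# are hypothetical stand-ins for real guidelines and a real MLLM API.

EXAMPLE_GUIDELINES = [
    "Describe visual content objectively; do not speculate.",
    "Mention on-screen text, people, and actions relevant to the scene.",
    "Keep each description concise and screen-reader friendly.",
]

def build_description_prompt(video_title, guidelines=EXAMPLE_GUIDELINES):
    """Assemble an instruction that conditions the model on accessibility
    guidelines before it describes sampled video frames."""
    rules = "\n".join(f"- {g}" for g in guidelines)
    return (
        "You are writing a video description for blind and low vision users.\n"
        f"Video: {video_title}\n"
        f"Follow these accessibility guidelines:\n{rules}\n"
        "Describe the key visual content of the attached frames."
    )

def describe_video(frames, video_title, call_mllm):
    # `frames` would be images sampled from the video; `call_mllm(prompt,
    # images)` is an assumed stand-in for a multimodal model call.
    return call_mllm(build_description_prompt(video_title), frames)
```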

{"title":"VideoA11y: Method and Dataset for Accessible Video Description.","authors":"Chaoyu Li, Sid Padmanabhuni, Maryam S Cheema, Hasti Seifi, Pooyan Fazli","doi":"10.1145/3706598.3714096","DOIUrl":"10.1145/3706598.3714096","url":null,"abstract":"<p><p>Video descriptions are crucial for blind and low vision (BLV) users to access visual content. However, current artificial intelligence models for generating descriptions often fall short due to limitations in the quality of human annotations within training datasets, resulting in descriptions that do not fully meet BLV users' needs. To address this gap, we introduce VideoA11y, an approach that leverages multimodal large language models (MLLMs) and video accessibility guidelines to generate descriptions tailored for BLV individuals. Using this method, we have curated VideoA11y-40K, the largest and most comprehensive dataset of 40,000 videos described for BLV users. Rigorous experiments across 15 video categories, involving 347 sighted participants, 40 BLV participants, and seven professional describers, showed that VideoA11y descriptions outperform novice human annotations and are comparable to trained human annotations in clarity, accuracy, objectivity, descriptiveness, and user satisfaction. We evaluated models on VideoA11y-40K using both standard and custom metrics, demonstrating that MLLMs fine-tuned on this dataset produce high-quality accessible descriptions. Code and dataset are available at https://people-robots.github.io/VideoA11y/.</p>","PeriodicalId":74552,"journal":{"name":"Proceedings of the SIGCHI conference on human factors in computing systems. CHI Conference","volume":"2025 ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2025-04-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12398407/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144981880","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
Tap&Say: Touch Location-Informed Large Language Model for Multimodal Text Correction on Smartphones.
Pub Date: 2025-04-01 | Epub Date: 2025-04-25 | DOI: 10.1145/3706598.3713376
Maozheng Zhao, Shanqing Cai, Shumin Zhai, Michael Xuelin Huang, Henry Huang, I V Ramakrishnan, Nathan G Huang, Michael G Huang, Xiaojun Bi

While voice input offers a convenient alternative to traditional text editing on mobile devices, practical implementations face two key challenges: 1) reliably distinguishing between editing commands and content dictation, and 2) effortlessly pinpointing the intended edit location. We propose Tap&Say, a novel multimodal system that combines touch interactions with Large Language Models (LLMs) for accurate text correction. By tapping near an error, users signal their edit intent and location, addressing both challenges. Then, the user speaks the correction text. Tap&Say utilizes the touch location, voice input, and existing text to generate contextually relevant correction suggestions. We propose a novel touch location-informed attention layer that integrates the tap location into the LLM's attention mechanism, enabling it to utilize the tap location for text correction. We fine-tuned the touch location-informed LLM on synthetic touch locations and correction commands, achieving significantly higher correction accuracy than the state-of-the-art method VT [45]. A 16-person user study demonstrated that Tap&Say outperforms VT [45] with 16.4% shorter task completion time and 47.5% fewer keyboard clicks and is preferred by users.
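
The abstract names a touch location-informed attention layer but does not give its form. One plausible reading, sketched below purely for illustration (this is not the paper's implementation), is to subtract a distance-dependent penalty from the attention logits so tokens rendered near the tap receive more weight; the `gamma` scale and the screen-coordinate inputs are assumptions.

```python
# Illustrative sketch, not the paper's implementation: one simple way to
# make attention "touch location-informed" is to bias the attention
# logits toward tokens whose on-screen position is close to the tap.
import numpy as np

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)  # subtract max for numerical stability
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def tap_biased_attention(q, k, v, token_xy, tap_xy, gamma=0.05):
    """q: (d,) query; k, v: (n, d) keys/values; token_xy: (n, 2) screen
    coordinates of each text token; tap_xy: (2,) tap location in the same
    coordinate space. `gamma` (hypothetical) scales the distance penalty."""
    d = q.shape[-1]
    logits = k @ q / np.sqrt(d)                       # standard scaled dot-product scores
    dist = np.linalg.norm(token_xy - tap_xy, axis=1)  # pixel distance from each token to the tap
    logits = logits - gamma * dist                    # tokens near the tap score higher
    weights = softmax(logits)
    return weights @ v                                # attention-weighted mix of values

# Toy example: 4 tokens laid out left to right, tap landing near token 2.
rng = np.random.default_rng(0)
q, k, v = rng.normal(size=4), rng.normal(size=(4, 4)), rng.normal(size=(4, 4))
token_xy = np.array([[10, 50], [60, 50], [110, 50], [160, 50]], dtype=float)
print(tap_biased_attention(q, k, v, token_xy, np.array([105.0, 52.0])))
```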

{"title":"Tap&Say: Touch Location-Informed Large Language Model for Multimodal Text Correction on Smartphones.","authors":"Maozheng Zhao, Shanqing Cai, Shumin Zhai, Michael Xuelin Huang, Henry Huang, I V Ramakrishnan, Nathan G Huang, Michael G Huang, Xiaojun Bi","doi":"10.1145/3706598.3713376","DOIUrl":"10.1145/3706598.3713376","url":null,"abstract":"<p><p>While voice input offers a convenient alternative to traditional text editing on mobile devices, practical implementations face two key challenges: 1) reliably distinguishing between editing commands and content dictation, and 2) effortlessly pinpointing the intended edit location. We propose Tap&Say, a novel multimodal system that combines touch interactions with Large Language Models (LLMs) for accurate text correction. By tapping near an error, users signal their edit intent and location, addressing both challenges. Then, the user speaks the correction text. Tap&Say utilizes the touch location, voice input, and existing text to generate contextually relevant correction suggestions. We propose a novel <i>touch location-informed attention</i> layer that integrates the tap location into the LLM's attention mechanism, enabling it to utilize the tap location for text correction. We fine-tuned the touch location-informed LLM on synthetic touch locations and correction commands, achieving significantly higher correction accuracy than the state-of-the-art method VT [45]. A 16-person user study demonstrated that Tap&Say outperforms VT [45] with 16.4% shorter task completion time and 47.5% fewer keyboard clicks and is preferred by users.</p>","PeriodicalId":74552,"journal":{"name":"Proceedings of the SIGCHI conference on human factors in computing systems. CHI Conference","volume":"2025 ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2025-04-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12723524/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145829274","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
What's In Your Kit? Mental Health Technology Kits for Depression Self-Management.
Pub Date: 2025-04-01 | Epub Date: 2025-04-25 | DOI: 10.1145/3706598.3713585
Eleanor R Burgess, David C Mohr, Sean A Munson, Madhu C Reddy

This paper characterizes the mental health technology "kits" of individuals managing depression: the specific technologies on their digital devices and physical items in their environments that people turn to as part of their mental health management. We interviewed 28 individuals living across the United States who use bundles of connected tools for both individual and collaborative mental health activities. We contribute to the HCI community by conceptualizing these tool assemblages that people managing depression have constructed over time. We detail categories of tools, describe kit characteristics (intentional, adaptable, available), and present participant ideas for future mental health support technologies. We then discuss what a mental health technology kit perspective means for researchers and designers and describe design principles (building within current toolkits; creating new tools from current self-management strategies; and identifying gaps in people's current kits) to support depression self-management across an evolving set of tools.

{"title":"What's In Your Kit? Mental Health Technology Kits for Depression Self-Management.","authors":"Eleanor R Burgess, David C Mohr, Sean A Munson, Madhu C Reddy","doi":"10.1145/3706598.3713585","DOIUrl":"10.1145/3706598.3713585","url":null,"abstract":"<p><p>This paper characterizes the mental health technology \"kits\" of individuals managing depression: the specific technologies on their digital devices and physical items in their environments that people turn to as part of their mental health management. We interviewed 28 individuals living across the United States who use bundles of connected tools for both individual and collaborative mental health activities. We contribute to the HCI community by conceptualizing these tool assemblages that people managing depression have constructed over time. We detail categories of tools, describe kit characteristics (intentional, adaptable, available), and present participant ideas for future mental health support technologies. We then discuss what a mental health technology kit perspective means for researchers and designers and describe design principles (building within current toolkits; creating new tools from current self-management strategies; and identifying gaps in people's current kits) to support depression self-management across an evolving set of tools.</p>","PeriodicalId":74552,"journal":{"name":"Proceedings of the SIGCHI conference on human factors in computing systems. CHI Conference","volume":"2025 ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2025-04-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12118807/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144176002","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
Micro-narratives: A Scalable Method for Eliciting Stories of People's Lived Experience.
Pub Date: 2025-04-01 | Epub Date: 2025-04-25 | DOI: 10.1145/3706598.3713999
Amira Skeggs, Ashish Mehta, Valerie Yap, Seray B Ibrahim, Charla Rhodes, James J Gross, Sean A Munson, Predrag Klasnja, Amy Orben, Petr Slovak

Engaging with people's lived experiences is foundational for HCI research and design. This paper introduces a novel narrative elicitation method to empower people to easily articulate 'micro-narratives' emerging from their lived experiences, irrespective of their writing ability or background. Our approach aims to enable at-scale collection of rich, co-created datasets that highlight target populations' voices with minimal participant burden, while precisely addressing specific research questions. To pilot this idea, and test its feasibility, we: (i) developed an AI-powered prototype, which leverages LLM-chaining to scaffold the cognitive steps necessary for users' narrative articulation; (ii) deployed it in three mixed-methods studies involving over 380 users; and (iii) consulted with established academics as well as C-level staff at (inter)national non-profits to map out potential applications. Both qualitative and quantitative findings show the acceptability and promise of the micro-narrative method, while also identifying the ethical and safeguarding considerations necessary for any at-scale deployments.

{"title":"Micro-narratives: A Scalable Method for Eliciting Stories of People's Lived Experience.","authors":"Amira Skeggs, Ashish Mehta, Valerie Yap, Seray B Ibrahim, Charla Rhodes, James J Gross, Sean A Munson, Predrag Klasnja, Amy Orben, Petr Slovak","doi":"10.1145/3706598.3713999","DOIUrl":"10.1145/3706598.3713999","url":null,"abstract":"<p><p>Engaging with people's lived experiences is foundational for HCI research and design. This paper introduces a novel narrative elicitation method to empower people to easily articulate 'micro-narratives' emerging from their lived experiences, irrespective of their writing ability or background. Our approach aims to enable at-scale collection of rich, co-created datasets that highlight target populations' voices with minimal participant burden, while precisely addressing specific research questions. To pilot this idea, and test its feasibility, we: (i) developed an AI-powered prototype, which leverages LLM-chaining to scaffold the cognitive steps necessary for users' narrative articulation; (ii) deployed it in three mixed-methods studies involving over 380 users; and (iii) consulted with established academics as well as C-level staff at (inter)national non-profits to map out potential applications. Both qualitative and quantitative findings show the acceptability and promise of the micro-narrative method, while also identifying the ethical and safeguarding considerations necessary for any at-scale deployments.</p>","PeriodicalId":74552,"journal":{"name":"Proceedings of the SIGCHI conference on human factors in computing systems. CHI Conference","volume":"2025 ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2025-04-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12265993/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144651462","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
VisiMark: Characterizing and Augmenting Landmarks for People with Low Vision in Augmented Reality to Support Indoor Navigation.
Pub Date: 2025-04-01 | Epub Date: 2025-04-25 | DOI: 10.1145/3706598.3713847
Ruijia Chen, Junru Jiang, Pragati Maheshwary, Brianna R Cochran, Yuhang Zhao

Landmarks are critical in navigation, supporting self-orientation and mental model development. Similar to sighted people, people with low vision (PLV) frequently look for landmarks via visual cues but face difficulties identifying some important landmarks due to vision loss. We first conducted a formative study with six PLV to characterize their challenges and strategies in landmark selection, identifying their unique landmark categories (e.g., area silhouettes, accessibility-related objects) and preferred landmark augmentations. We then designed VisiMark, an AR interface that supports landmark perception for PLV by providing both overviews of space structures and in-situ landmark augmentations. We evaluated VisiMark with 16 PLV and found that VisiMark enabled PLV to perceive landmarks they preferred but could not easily perceive before, and changed PLV's landmark selection from only visually-salient objects to cognitive landmarks that are more important and meaningful. We further derive design considerations for AR-based landmark augmentation systems for PLV.

{"title":"VisiMark: Characterizing and Augmenting Landmarks for People with Low Vision in Augmented Reality to Support Indoor Navigation.","authors":"Ruijia Chen, Junru Jiang, Pragati Maheshwary, Brianna R Cochran, Yuhang Zhao","doi":"10.1145/3706598.3713847","DOIUrl":"10.1145/3706598.3713847","url":null,"abstract":"<p><p>Landmarks are critical in navigation, supporting self-orientation and mental model development. Similar to sighted people, people with low vision (PLV) frequently look for landmarks via visual cues but face difficulties identifying some important landmarks due to vision loss. We first conducted a formative study with six PLV to characterize their challenges and strategies in landmark selection, identifying their unique landmark categories (e.g., area silhouettes, accessibility-related objects) and preferred landmark augmentations. We then designed <i>VisiMark</i>, an AR interface that supports landmark perception for PLV by providing both overviews of space structures and in-situ landmark augmentations. We evaluated VisiMark with 16 PLV and found that VisiMark enabled PLV to perceive landmarks they preferred but could not easily perceive before, and changed PLV's landmark selection from only visually-salient objects to cognitive landmarks that are more important and meaningful. We further derive design considerations for AR-based landmark augmentation systems for PLV.</p>","PeriodicalId":74552,"journal":{"name":"Proceedings of the SIGCHI conference on human factors in computing systems. CHI Conference","volume":"2025 ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2025-04-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12269830/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144661236","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
The Voice of Endo: Leveraging Speech for an Intelligent System That Can Forecast Illness Flare-ups.
Pub Date: 2025-04-01 | Epub Date: 2025-04-25 | DOI: 10.1145/3706598.3714040
Adrienne Pichon, Jessica R Blumberg, Lena Mamykina, Noémie Elhadad

Managing complex chronic illness is challenging due to its unpredictability. This paper explores the potential of voice for automated flare-up forecasts. We conducted a six-week speculative design study with individuals with endometriosis, asking participants to submit daily voice recordings and symptom logs. Through focus groups, we elicited their experiences with voice capture and perceptions of its usefulness in forecasting flare-ups. Participants were enthusiastic and intrigued by the potential of flare-up forecasts through the analysis of their voice. They highlighted imagined benefits from the experience of recording in supporting emotional aspects of illness and validating both day-to-day and overall illness experiences. Participants reported that their recordings revolved around their endometriosis, suggesting that the recordings' content could further inform forecasting. We discuss potential opportunities and challenges in leveraging the voice as a data modality in human-centered AI tools that support individuals with complex chronic conditions.

{"title":"The Voice of Endo: Leveraging Speech for an Intelligent System That Can Forecast Illness Flare-ups.","authors":"Adrienne Pichon, Jessica R Blumberg, Lena Mamykina, Noémie Elhadad","doi":"10.1145/3706598.3714040","DOIUrl":"10.1145/3706598.3714040","url":null,"abstract":"<p><p>Managing complex chronic illness is challenging due to its unpredictability. This paper explores the potential of voice for automated flare-up forecasts. We conducted a six-week speculative design study with individuals with endometriosis, tasking participants to submit daily voice recordings and symptom logs. Through focus groups, we elicited their experiences with voice capture and perceptions of its usefulness in forecasting flare-ups. Participants were enthusiastic and intrigued at the potential of flare-up forecasts through the analysis of their voice. They highlighted imagined benefits from the experience of recording in supporting emotional aspects of illness and validating both day-to-day and overall illness experiences. Participants reported that their recordings revolved around their endometriosis, suggesting that the recordings' content could further inform forecasting. We discuss potential opportunities and challenges in leveraging the voice as a data modality in human-centered AI tools that support individuals with complex chronic conditions.</p>","PeriodicalId":74552,"journal":{"name":"Proceedings of the SIGCHI conference on human factors in computing systems. CHI Conference","volume":"2025 ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2025-04-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12439622/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145082641","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0