Wearable devices provide rich quantitative data for self-reflection on physical activity, yet users often struggle to derive meaningful insights from these data, highlighting the need for better reflection support. To investigate whether Large Language Models (LLMs) can facilitate this process, we propose and evaluate a human-LLM collaborative reflective journaling paradigm. We developed PaceMind, an LLM-mediated journaling system that implements this paradigm based on a three-stage reflection framework: it generates data-driven drafts and personalized questions that guide users in integrating exercise data with personal insights. A two-week within-subjects study () compared the LLM-mediated system with a template-based journaling baseline. The LLM-mediated design significantly improved the perceived effectiveness of reflection support and increased users’ intention to use the system, although perceived ease of use did not improve significantly. Users appreciated the LLM’s scaffolding for easing data sense-making but reported added cognitive work in verifying and personalizing the LLM-generated content. Although objective activity levels did not change significantly, the LLM-mediated condition showed a trend toward more adaptive exercise planning and sustained engagement. Our findings provide empirical evidence for a human-LLM collaborative reflection paradigm in a data-intensive exercise context, highlighting both the potential of LLMs to deepen user reflection and the critical design challenge of balancing automation with meaningful cognitive engagement and user control.
