It's the Gesture That (re)Counts: Annotating While Running to Recall Affective Experience

Felwah Alqahtani, Derek F. Reilly
{"title":"It's the Gesture That (re)Counts: Annotating While Running to Recall Affective Experience","authors":"Felwah Alqahtani, Derek F. Reilly","doi":"10.20380/GI2018.12","DOIUrl":null,"url":null,"abstract":"We present results from a study exploring whether gestural annotations of felt emotion presented on a map-based visualization can support recall of affective experience during recreational runs. We compare gestural annotations with audio and video notes and a “mental note” baseline. In our study, 20 runners were asked to record their emotional state at regular intervals while running a familiar route. Each runner used one of the four methods to capture emotion over four separate runs. Five days after the last run, runners used an interactive map-based visualization to review and recall their running experiences. Results indicate that gestural annotation promoted recall of affective experience more effectively than the baseline condition, as measured by confidence in recall and detail provided. Gestural annotation was also comparable to video and audio annotation in terms of recollection confidence and detail. Audio annotation supported recall primarily through the runner's spoken annotation, but sound in the background was sometimes used. Video annotation yielded the most detail, much directly related to visual cues in the video, however using video annotations required runners to stop during their runs. Given these results we propose that background logging of ambient sounds and video may supplement gestural annotation.","PeriodicalId":230994,"journal":{"name":"Proceedings of the 44th Graphics Interface Conference","volume":"118 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2018-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Proceedings of the 44th Graphics Interface Conference","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.20380/GI2018.12","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}

Abstract

We present results from a study exploring whether gestural annotations of felt emotion, presented on a map-based visualization, can support recall of affective experience during recreational runs. We compare gestural annotations with audio and video notes and a “mental note” baseline. In our study, 20 runners were asked to record their emotional state at regular intervals while running a familiar route. Each runner used one of the four methods to capture emotion over four separate runs. Five days after the last run, runners used an interactive map-based visualization to review and recall their running experiences. Results indicate that gestural annotation promoted recall of affective experience more effectively than the baseline condition, as measured by confidence in recall and the detail provided. Gestural annotation was also comparable to video and audio annotation in terms of recollection confidence and detail. Audio annotation supported recall primarily through the runner's spoken annotation, though background sound was sometimes used. Video annotation yielded the most detail, much of it directly related to visual cues in the video; however, using video annotations required runners to stop during their runs. Given these results, we propose that background logging of ambient sound and video may supplement gestural annotation.