Effects of Viewing Multiple Viewpoint Videos on Metacognition of Collaborative Experiences

Y. Sumi, M. Suwa, Koichi Hanaue
DOI: 10.1145/3173574.3174222
Published: 2018-04-21, in Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems
Citations: 2

Abstract

This paper discusses the effects of viewing multiple viewpoint videos on the metacognition of collaborative experiences. We present a system for recording multiple users' collaborative experiences with wearable and environmental sensors, and a companion system for viewing multiple viewpoint videos that are automatically identified, extracted, and associated with individual users. We designed an experiment comparing metacognition of one's own experience based on memory alone with metacognition supported by video viewing. The results show that metacognitive descriptions of one's own mind, such as feelings and preferences, are possible regardless of whether a person views videos, but episodic descriptions, such as the content of someone's utterance and what he or she felt about it, are strongly promoted by video viewing. In a follow-up experiment about half a year later, the same participants performed identical metacognitive description tasks. Across the two experiments, we found that first-person-view video is mostly used to confirm episodic facts immediately after the experience, whereas after half a year even one's own experience often feels like someone else's; videos capturing the participants from the viewpoints of their conversation partners and the environment therefore become important for thinking back to the situations in which they were placed.