The visual experience dataset: Over 200 recorded hours of integrated eye movement, odometry, and egocentric video.

Journal of Vision · IF 2.0 · CAS Tier 4 (Psychology) · JCR Q2 (Ophthalmology) · Pub Date: 2024-10-03 · DOI: 10.1167/jov.24.11.6
Michelle R Greene, Benjamin J Balas, Mark D Lescroart, Paul R MacNeilage, Jennifer A Hart, Kamran Binaee, Peter A Hausamann, Ronald Mezile, Bharath Shankar, Christian B Sinnott, Kaylie Capurro, Savannah Halow, Hunter Howe, Mariam Josyula, Annie Li, Abraham Mieses, Amina Mohamed, Ilya Nudnou, Ezra Parkhill, Peter Riley, Brett Schmidt, Matthew W Shinkle, Wentao Si, Brian Szekely, Joaquin M Torres, Eliana Weissmann
{"title":"视觉体验数据集:超过 200 个小时的综合眼球运动、里程测量和自我中心视频记录。","authors":"Michelle R Greene, Benjamin J Balas, Mark D Lescroart, Paul R MacNeilage, Jennifer A Hart, Kamran Binaee, Peter A Hausamann, Ronald Mezile, Bharath Shankar, Christian B Sinnott, Kaylie Capurro, Savannah Halow, Hunter Howe, Mariam Josyula, Annie Li, Abraham Mieses, Amina Mohamed, Ilya Nudnou, Ezra Parkhill, Peter Riley, Brett Schmidt, Matthew W Shinkle, Wentao Si, Brian Szekely, Joaquin M Torres, Eliana Weissmann","doi":"10.1167/jov.24.11.6","DOIUrl":null,"url":null,"abstract":"<p><p>We introduce the Visual Experience Dataset (VEDB), a compilation of more than 240 hours of egocentric video combined with gaze- and head-tracking data that offer an unprecedented view of the visual world as experienced by human observers. The dataset consists of 717 sessions, recorded by 56 observers ranging from 7 to 46 years of age. This article outlines the data collection, processing, and labeling protocols undertaken to ensure a representative sample and discusses the potential sources of error or bias within the dataset. The VEDB's potential applications are vast, including improving gaze-tracking methodologies, assessing spatiotemporal image statistics, and refining deep neural networks for scene and activity recognition. The VEDB is accessible through established open science platforms and is intended to be a living dataset with plans for expansion and community contributions. It is released with an emphasis on ethical considerations, such as participant privacy and the mitigation of potential biases. By providing a dataset grounded in real-world experiences and accompanied by extensive metadata and supporting code, the authors invite the research community to use and contribute to the VEDB, facilitating a richer understanding of visual perception and behavior in naturalistic settings.</p>","PeriodicalId":49955,"journal":{"name":"Journal of Vision","volume":"24 11","pages":"6"},"PeriodicalIF":2.0000,"publicationDate":"2024-10-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11466363/pdf/","citationCount":"0","resultStr":"{\"title\":\"The visual experience dataset: Over 200 recorded hours of integrated eye movement, odometry, and egocentric video.\",\"authors\":\"Michelle R Greene, Benjamin J Balas, Mark D Lescroart, Paul R MacNeilage, Jennifer A Hart, Kamran Binaee, Peter A Hausamann, Ronald Mezile, Bharath Shankar, Christian B Sinnott, Kaylie Capurro, Savannah Halow, Hunter Howe, Mariam Josyula, Annie Li, Abraham Mieses, Amina Mohamed, Ilya Nudnou, Ezra Parkhill, Peter Riley, Brett Schmidt, Matthew W Shinkle, Wentao Si, Brian Szekely, Joaquin M Torres, Eliana Weissmann\",\"doi\":\"10.1167/jov.24.11.6\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<p><p>We introduce the Visual Experience Dataset (VEDB), a compilation of more than 240 hours of egocentric video combined with gaze- and head-tracking data that offer an unprecedented view of the visual world as experienced by human observers. The dataset consists of 717 sessions, recorded by 56 observers ranging from 7 to 46 years of age. This article outlines the data collection, processing, and labeling protocols undertaken to ensure a representative sample and discusses the potential sources of error or bias within the dataset. 
The VEDB's potential applications are vast, including improving gaze-tracking methodologies, assessing spatiotemporal image statistics, and refining deep neural networks for scene and activity recognition. The VEDB is accessible through established open science platforms and is intended to be a living dataset with plans for expansion and community contributions. It is released with an emphasis on ethical considerations, such as participant privacy and the mitigation of potential biases. By providing a dataset grounded in real-world experiences and accompanied by extensive metadata and supporting code, the authors invite the research community to use and contribute to the VEDB, facilitating a richer understanding of visual perception and behavior in naturalistic settings.</p>\",\"PeriodicalId\":49955,\"journal\":{\"name\":\"Journal of Vision\",\"volume\":\"24 11\",\"pages\":\"6\"},\"PeriodicalIF\":2.0000,\"publicationDate\":\"2024-10-03\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11466363/pdf/\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Journal of Vision\",\"FirstCategoryId\":\"3\",\"ListUrlMain\":\"https://doi.org/10.1167/jov.24.11.6\",\"RegionNum\":4,\"RegionCategory\":\"心理学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q2\",\"JCRName\":\"OPHTHALMOLOGY\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Journal of Vision","FirstCategoryId":"3","ListUrlMain":"https://doi.org/10.1167/jov.24.11.6","RegionNum":4,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q2","JCRName":"OPHTHALMOLOGY","Score":null,"Total":0}
Citations: 0

Abstract

We introduce the Visual Experience Dataset (VEDB), a compilation of more than 240 hours of egocentric video combined with gaze- and head-tracking data that offer an unprecedented view of the visual world as experienced by human observers. The dataset consists of 717 sessions, recorded by 56 observers ranging from 7 to 46 years of age. This article outlines the data collection, processing, and labeling protocols undertaken to ensure a representative sample and discusses the potential sources of error or bias within the dataset. The VEDB's potential applications are vast, including improving gaze-tracking methodologies, assessing spatiotemporal image statistics, and refining deep neural networks for scene and activity recognition. The VEDB is accessible through established open science platforms and is intended to be a living dataset with plans for expansion and community contributions. It is released with an emphasis on ethical considerations, such as participant privacy and the mitigation of potential biases. By providing a dataset grounded in real-world experiences and accompanied by extensive metadata and supporting code, the authors invite the research community to use and contribute to the VEDB, facilitating a richer understanding of visual perception and behavior in naturalistic settings.
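To give a concrete feel for what "integrated" means here, the sketch below aligns a gaze stream to egocentric video frame times with a nearest-neighbor timestamp match. All file layouts, column names, and sampling rates in it are illustrative assumptions, not the VEDB's actual schema; the dataset's own metadata and supporting code document the real interface.

    # Minimal, hypothetical sketch: align gaze samples to egocentric video
    # frames by timestamp. Column names, rates, and layout are illustrative
    # assumptions, NOT the VEDB's actual schema.
    import numpy as np
    import pandas as pd

    FPS = 30.0          # assumed world-camera frame rate
    GAZE_HZ = 120.0     # assumed eye-tracker sampling rate
    DURATION_S = 10.0   # stand-in for a real session's length

    # Stand-in gaze stream: timestamp (s) plus normalized gaze position.
    # In practice this would be loaded from a session's exported gaze file.
    t_gaze = np.arange(0, DURATION_S, 1 / GAZE_HZ)
    gaze = pd.DataFrame({
        "timestamp": t_gaze,
        "gaze_x": np.random.rand(t_gaze.size),
        "gaze_y": np.random.rand(t_gaze.size),
    })

    # Frame-time table for the egocentric video.
    t_frames = np.arange(0, DURATION_S, 1 / FPS)
    frames = pd.DataFrame({
        "frame_index": np.arange(t_frames.size),
        "timestamp": t_frames,
    })

    # Nearest-neighbor match each frame to the closest gaze sample in time,
    # tolerating up to half a frame interval of clock offset.
    aligned = pd.merge_asof(
        frames, gaze, on="timestamp",
        direction="nearest", tolerance=0.5 / FPS,
    )
    print(aligned.head())

A session's other streams (for example, head-tracking or odometry) could in principle be merged onto the same frame-time table the same way, one merge_asof per stream.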

Source journal: Journal of Vision (Medicine - Ophthalmology)
CiteScore: 2.90 · Self-citation rate: 5.60% · Annual publications: 218 · Review time: 3-6 weeks
About the journal: Exploring all aspects of biological visual function, including spatial vision, perception, low vision, color vision and more, spanning the fields of neuroscience, psychology and psychophysics.