Poster: A virtual body for augmented virtuality by chroma-keying of egocentric videos

Frank Steinicke, G. Bruder, K. Rothaus, K. Hinrichs
{"title":"Poster: A virtual body for augmented virtuality by chroma-keying of egocentric videos","authors":"Frank Steinicke, G. Bruder, K. Rothaus, K. Hinrichs","doi":"10.1109/3DUI.2009.4811218","DOIUrl":null,"url":null,"abstract":"A fully-articulated visual representation of oneself in an immersive virtual environment has considerable impact on the subjective sense of presence in the virtual world. Therefore, many approaches address this challenge and incorporate a virtual model of the user's body in the VE. Such a “virtual body” (VB) is manipulated according to user motions which are defined by feature points detected by a tracking system. The required tracking devices are unsuitable in scenarios which involve multiple persons simultaneously or in which participants frequently change. Furthermore, individual characteristics such as skin pigmentation, hairiness or clothes are not considered by this procedure. In this paper we present a software-based approach that allows to incorporate a realistic visual representation of oneself in the VE. The idea is to make use of images captured by cameras that are attached to video-see-through head-mounted displays. These egocentric frames can be segmented into foreground showing parts of the human body and background. Then the extremities can be overlayed with the user's current view of the virtual world, and thus a high-fidelity virtual body can be visualized.","PeriodicalId":125705,"journal":{"name":"2009 IEEE Symposium on 3D User Interfaces","volume":"28 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2009-03-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"16","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2009 IEEE Symposium on 3D User Interfaces","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/3DUI.2009.4811218","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 16

Abstract

A fully articulated visual representation of oneself in an immersive virtual environment has considerable impact on the subjective sense of presence in the virtual world. Therefore, many approaches address this challenge and incorporate a virtual model of the user's body in the VE. Such a "virtual body" (VB) is manipulated according to user motions, which are defined by feature points detected by a tracking system. The required tracking devices are unsuitable for scenarios that involve multiple persons simultaneously or in which participants change frequently. Furthermore, individual characteristics such as skin pigmentation, hairiness, or clothing are not considered by this procedure. In this paper we present a software-based approach that makes it possible to incorporate a realistic visual representation of oneself in the VE. The idea is to make use of images captured by cameras attached to video see-through head-mounted displays. These egocentric frames can be segmented into a foreground showing parts of the human body and a background. The extremities can then be overlaid onto the user's current view of the virtual world, and thus a high-fidelity virtual body can be visualized.
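The core of the approach as described above is a per-frame segmentation of the egocentric video followed by compositing with the rendered view. The sketch below is a minimal illustration of such a chroma-key pipeline, not the authors' implementation: it assumes the camera frame and the VE render are already aligned and of equal resolution, and it uses a hypothetical green-key HSV threshold. OpenCV and NumPy are used for brevity; all function and file names are illustrative.

import cv2
import numpy as np

def composite_virtual_body(camera_frame_bgr, ve_render_bgr,
                           key_lower_hsv=(35, 60, 60),
                           key_upper_hsv=(85, 255, 255)):
    """Overlay chroma-keyed body parts from an egocentric frame onto a VE render.

    camera_frame_bgr and ve_render_bgr are assumed to be aligned BGR images of
    the same resolution. The HSV key range is a hypothetical green-screen
    threshold and would need tuning for real lighting conditions.
    """
    hsv = cv2.cvtColor(camera_frame_bgr, cv2.COLOR_BGR2HSV)

    # Pixels matching the key color belong to the real-scene background.
    background_mask = cv2.inRange(hsv, np.array(key_lower_hsv), np.array(key_upper_hsv))

    # Everything else is treated as foreground, i.e., visible parts of the body.
    body_mask = cv2.bitwise_not(background_mask)

    # Light morphological cleanup to suppress speckle noise along the silhouette.
    kernel = np.ones((3, 3), np.uint8)
    body_mask = cv2.morphologyEx(body_mask, cv2.MORPH_OPEN, kernel)
    body_mask = cv2.morphologyEx(body_mask, cv2.MORPH_CLOSE, kernel)

    # Composite: keep the VE render where the key matched, and the camera
    # pixels where the body is visible.
    body = cv2.bitwise_and(camera_frame_bgr, camera_frame_bgr, mask=body_mask)
    scene = cv2.bitwise_and(ve_render_bgr, ve_render_bgr, mask=background_mask)
    return cv2.add(body, scene)

if __name__ == "__main__":
    cam = cv2.imread("egocentric_frame.png")  # hypothetical input files
    ve = cv2.imread("ve_render.png")
    if cam is not None and ve is not None:
        cv2.imwrite("augmented_view.png", composite_virtual_body(cam, ve))

In the actual system, the key thresholds, any handling of individual skin or clothing appearance, and the registration between the camera image and the rendered view would depend on the video see-through HMD setup described in the full paper.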