An infrared-based depth camera for gesture-based control of virtual environments

D. Ionescu, V. Suse, C. Gadea, B. Solomon, B. Ionescu, S. Islam
{"title":"An infrared-based depth camera for gesture-based control of virtual environments","authors":"D. Ionescu, V. Suse, C. Gadea, B. Solomon, B. Ionescu, S. Islam","doi":"10.1109/CIVEMSA.2013.6617388","DOIUrl":null,"url":null,"abstract":"Gesture Control dominates presently the research on new human computer interfaces. The domain covers both the sensors to capture gestures and also the driver software which interprets the gesture mapping it onto a robust command. More recently, there is a trend to use depth-mapping camera as the 2D cameras fall short in assuring the conditions of real-time robustness of the whole system. As image processing is at the core of the detection, recognition, and tracking the gesture, depth mapping sensors have to provide a depth image insensitive to illumination conditions. Thus depth-mapping cameras work in a certain wavelength of the infrared (IR) spectrum. In this paper, a novel real-time depth-mapping principle for an IR camera is introduced. The new IR camera architecture comprises an illuminator module which is pulse-modulated via a monotonic function using a cycle driven feedback loop for the control of laser intensity, while the reflected infrared light is captured in “slices” of the space in which the object of interest is situated. A reconfigurable hardware architecture unit calculates the depth slices and combines them in a depth-map of the object to be further used in the detection, tracking, and recognition of the gesture made by the user. Images of real objects are reconstructed in 3D based on the data obtained by the space-slicing technique, and a corresponding image processing algorithm builds the 3D map of the object in real-time. As this paper will show through a series of experiments, the camera can be used in a variety of domains, including for gesture control of 3D objects in virtual environments.","PeriodicalId":159100,"journal":{"name":"2013 IEEE International Conference on Computational Intelligence and Virtual Environments for Measurement Systems and Applications (CIVEMSA)","volume":"89 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2013-07-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"6","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2013 IEEE International Conference on Computational Intelligence and Virtual Environments for Measurement Systems and Applications (CIVEMSA)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/CIVEMSA.2013.6617388","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 6

Abstract

Gesture control currently dominates research on new human-computer interfaces. The domain covers both the sensors that capture gestures and the driver software that interprets a gesture and maps it onto a robust command. More recently, there has been a trend toward depth-mapping cameras, as 2D cameras fall short of ensuring real-time robustness for the whole system. Because image processing is at the core of detecting, recognizing, and tracking gestures, depth-mapping sensors have to provide a depth image that is insensitive to illumination conditions; depth-mapping cameras therefore operate at a specific wavelength in the infrared (IR) spectrum. In this paper, a novel real-time depth-mapping principle for an IR camera is introduced. The new IR camera architecture comprises an illuminator module that is pulse-modulated via a monotonic function, using a cycle-driven feedback loop to control laser intensity, while the reflected infrared light is captured in "slices" of the space in which the object of interest is situated. A reconfigurable hardware unit calculates the depth slices and combines them into a depth map of the object, which is then used for detecting, tracking, and recognizing the gesture made by the user. Images of real objects are reconstructed in 3D from the data obtained by the space-slicing technique, and a corresponding image processing algorithm builds the 3D map of the object in real time. As this paper shows through a series of experiments, the camera can be used in a variety of domains, including gesture control of 3D objects in virtual environments.
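To make the space-slicing idea concrete, the sketch below shows one plausible way per-slice IR images could be fused into a depth map: each slice is treated as an image of the scene illuminated out to a nominal distance, and each pixel is assigned the distance of the nearest slice in which it appears lit. This is only a minimal illustration under assumed inputs (binary-like slice intensities, known per-slice distances), not the paper's reconfigurable-hardware implementation.

```python
import numpy as np

def combine_slices_to_depth(slices, slice_depths, threshold=0.5):
    """Fuse per-slice IR reflection images into a single depth map.

    slices       : array (N, H, W) of IR intensity images, one per depth
                   "slice" (hypothetical input format for this sketch).
    slice_depths : array (N,) of nominal distances for each slice, ordered
                   from nearest to farthest.
    threshold    : intensity above which a pixel counts as lit in a slice.

    Returns an (H, W) depth map; pixels never lit in any slice are NaN.
    """
    slices = np.asarray(slices, dtype=float)
    lit = slices > threshold                   # (N, H, W) boolean masks
    first_lit = lit.argmax(axis=0)             # index of nearest lit slice
    any_lit = lit.any(axis=0)                  # pixels seen in at least one slice
    depth = np.where(any_lit,
                     np.asarray(slice_depths, dtype=float)[first_lit],
                     np.nan)
    return depth

if __name__ == "__main__":
    # Example with 4 synthetic 2x2 slices spaced 0.5 m apart
    rng = np.random.default_rng(0)
    demo_slices = rng.random((4, 2, 2))
    print(combine_slices_to_depth(demo_slices, [0.5, 1.0, 1.5, 2.0]))
```

In the actual system this fusion is described as being performed by a reconfigurable hardware unit in real time; the NumPy version above only mirrors the combining step conceptually.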