
Latest publications: 2011 IEEE International Symposium on VR Innovation

SOM-based hand gesture recognition for virtual interactions
Pub Date : 2011-03-19 DOI: 10.1109/ISVRI.2011.5759659
Shuai Jin, Yi Li, Guangming Lu, Jian-xun Luo, Weidong Chen, Xiaoxiang Zheng
Nowadays, hand gestures can serve as a more natural and convenient means of human-computer interaction, and a direct gesture interface offers a new way of communicating with virtual environments. In this paper, we propose a new hand gesture recognition method that applies a self-organizing map (SOM), a type of machine learning algorithm, to dataglove input. The method takes the raw data sampled from the datagloves as input vectors and builds a mapping between these uncalibrated data and gesture commands. The results report the average recognition rate and time efficiency of SOM-based, dataglove-driven gesture recognition, and a series of tasks in a virtual house illustrates the performance of the resulting interaction method.
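As a rough sketch of the kind of pipeline described above (a generic SOM, not the authors' implementation; the grid size, sensor dimensionality, and training schedule are all illustrative assumptions):

```python
import numpy as np

def train_som(samples, grid=(6, 6), epochs=10, lr0=0.5, sigma0=2.0, seed=0):
    """Train a small self-organizing map on raw (uncalibrated) sensor vectors."""
    rng = np.random.default_rng(seed)
    rows, cols = grid
    weights = rng.random((rows * cols, samples.shape[1]))
    # 2D grid coordinates of each unit, for neighborhood distances.
    coords = np.array([(i, j) for i in range(rows) for j in range(cols)], float)
    n_steps = epochs * len(samples)
    t = 0
    for _ in range(epochs):
        for x in samples[rng.permutation(len(samples))]:
            frac = t / n_steps
            lr = lr0 * (1.0 - frac)               # learning rate decays toward 0
            sigma = sigma0 * (1.0 - frac) + 1e-3  # neighborhood radius shrinks
            bmu = int(np.argmin(np.linalg.norm(weights - x, axis=1)))
            # Gaussian neighborhood: units near the best-matching unit
            # on the grid are pulled toward the sample.
            d2 = np.sum((coords - coords[bmu]) ** 2, axis=1)
            weights += lr * np.exp(-d2 / (2.0 * sigma**2))[:, None] * (x - weights)
            t += 1
    return weights

def bmu_index(weights, x):
    """Index of the best-matching unit for a sample."""
    return int(np.argmin(np.linalg.norm(weights - x, axis=1)))
```

After training, each map unit can be labeled with the gesture command whose samples it wins most often, which turns the best-matching-unit lookup into a classifier for uncalibrated glove data.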
Citations: 18
AR-Ghost Hunter: An augmented reality gun application
Pub Date : 2011-03-19 DOI: 10.1109/ISVRI.2011.5759617
Dongdong Weng, Xin Liu, Yongtian Wang, Yue Liu
This paper presents an augmented reality gun application named AR-Ghost Hunter. AR-Ghost Hunter is an extension of the traditional first-person game that adopts an innovative infrared marker system and a portable computer to form a complete mobile AR system. In this system, players can fight virtual ghosts in a real environment through special gun-like devices. The basic issues of the system, such as infrared marker identification, pose estimation, and the user's devices, are discussed.
Citations: 1
Synthesis of footstep sounds of crowd from single step sound based on cognitive property of footstep sounds
Pub Date : 2011-03-19 DOI: 10.1109/ISVRI.2011.5759644
T. Kayahara, Hiroki Abe
The crowd sound effect (“Gaya”, a Japanese technical term) plays an important role in creating and conveying the atmosphere of a crowded movie scene, but the technique for authoring “Gaya” sound has not been scientifically described so far.
Citations: 2
Dynamic adaptation of broad phase collision detection algorithms
Pub Date : 2011-03-19 DOI: 10.1109/ISVRI.2011.5759599
Quentin Avril, V. Gouranton, B. Arnaldi
In this paper we present a new technique for dynamically adapting the first step (broad phase) of the collision detection process to the hardware architecture during simulation. Our approach copes with the unpredictable evolution of the simulation scenario (including the addition of complex objects, deletions, and splits into several objects). The dynamic adaptation technique runs on sequential CPU, multi-core, single-GPU, and multi-GPU architectures. We propose using off-line simulations to determine the fields of optimal performance of broad-phase algorithms and then exploiting them during on-line simulation. This is achieved through an analysis of algorithmic performance characteristics on different architectures. In this way we ensure real-time adaptation of the broad-phase algorithm during the simulation, switching to a more appropriate candidate when needed. We also present a study of how graphics hardware parameters (number of cores, bandwidth, etc.) influence algorithmic performance; the goal of this analysis is to determine whether a link can be found between variations in algorithm performance and hardware parameters. We test and compare our model on 1-, 2-, 4-, and 8-core architectures, as well as on one Quadro FX 3600M, two Quadro FX 4600, and four Quadro FX 5800 GPUs. Our results show that using this technique during collision detection provides better performance throughout the simulation and makes it possible to cope with unpredictable scenario evolution in large-scale virtual environments.
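For orientation, the broad phase being adapted is the coarse pruning stage that precedes exact collision tests; one classic candidate is sweep-and-prune over axis-aligned bounding boxes. A minimal single-axis sketch (a generic illustration, not code from the paper):

```python
def aabb_overlap(a, b):
    """Full 3D axis-aligned bounding-box overlap test.
    Boxes are ((minx, miny, minz), (maxx, maxy, maxz))."""
    (amin, amax), (bmin, bmax) = a, b
    return all(amin[k] <= bmax[k] and bmin[k] <= amax[k] for k in range(3))

def sweep_and_prune(boxes):
    """Broad phase: sort boxes by min-x, then test only pairs whose
    x-intervals overlap; far-apart pairs are pruned without a full test."""
    order = sorted(range(len(boxes)), key=lambda i: boxes[i][0][0])
    pairs = []
    for pos, i in enumerate(order):
        for j in order[pos + 1:]:
            if boxes[j][0][0] > boxes[i][1][0]:  # j starts beyond i's max-x
                break  # later boxes start even further right: prune them all
            if aabb_overlap(boxes[i], boxes[j]):
                pairs.append(tuple(sorted((i, j))))
    return pairs
```

The pruning cost depends heavily on object count and spatial distribution, which is why, as the paper argues, no single broad-phase algorithm is optimal across all scenarios and architectures.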
Citations: 20
Effects of hand feedback fidelity on near space pointing performance and user acceptance
Pub Date : 2011-03-19 DOI: 10.1109/ISVRI.2011.5759609
A. Pusch, O. Martin, S. Coquillart
In this paper, we report on an experiment testing the effects of different hand representations on near-space pointing performance and user preference. Subjects were presented with varying levels of hand realism: real hand video, high- and low-fidelity 3D hand models, and an ordinary 3D pointer arrow. Behavioural data revealed that an abstract hand substitute such as a 3D pointer arrow leads to significantly larger position estimation errors, in terms of lateral target overshoot, when touching virtual surfaces with only visual constraints on hand movement. Questionnaire results further show that a higher-fidelity hand is preferred over lower-fidelity representations for different aspects of the task, but we cannot conclude that real-time video feedback of one's own hand is rated better than a high-fidelity static 3D hand model. Overall, these results, which largely confirm previous research, suggest that although higher-fidelity hand feedback is desirable from a user-acceptance point of view, motor performance does not seem to be affected by varying degrees of limb realism, as long as a hand-like shape is provided.
Citations: 17
MRStudio: A mixed reality display system for aircraft cockpit
Pub Date : 2011-03-19 DOI: 10.1109/ISVRI.2011.5759615
Huagen Wan, Song Zou, Zilong Dong, Hai Lin, H. Bao
Mixed reality techniques are boosting progress in aviation. In this paper, we present MRStudio, a mixed reality display system for aircraft cockpits. We describe the system architecture, paying special attention to technical issues such as three-dimensional map construction for the cockpit, computer-vision-based 6-DOF head tracking, virtual cockpit panel construction and registration, and mixed reality display using a flexible client-server architecture. A testing scenario on a full-scale mockup of COMAC's ARJ21 cockpit is described.
Citations: 7
A vibro-tactile system for image contour display
Pub Date : 2011-03-19 DOI: 10.1109/ISVRI.2011.5759619
Juan Wu, Zhenzhong Song, Wei-Zun Wu, Aiguo Song, D. Constantinescu
This paper presents the design and testing of an image contour display system based on a vibrotactile array. The tactile display is worn on the user's back; it renders a non-visual image and lets subjects determine the position, size, and shape of visible objects through vibration stimuli. The system comprises three parts: 1) a USB camera; 2) 48 (6×8) vibrating motors; and 3) an ARM microcontroller system. An image is captured with the camera, and its 2D contour is extracted and transformed into vibrotactile stimuli using a “contour following” (time-spatial dynamic coding) pattern. With this system, subjects could identify the shape of an object without special training, while fewer vibrotactile actuators are required. Preliminary experiments demonstrated that the prototype is a satisfactory and efficient seeing aid and environment-perception tool for the visually impaired.
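The “contour following” coding described above can be illustrated by mapping contour points onto the 6×8 motor grid in time order, activating one motor per step; the image resolution and the duplicate-collapsing rule below are assumptions, not the authors' exact scheme:

```python
def contour_to_motor_sequence(contour, img_w, img_h, rows=6, cols=8):
    """Map 2D contour points (x, y) in image coordinates onto a rows x cols
    vibration-motor grid, producing the time-ordered activation sequence of
    a "contour following" display: one linear motor index per step, with
    consecutive duplicates collapsed."""
    seq = []
    for x, y in contour:
        r = min(rows - 1, int(y * rows / img_h))  # grid row for this point
        c = min(cols - 1, int(x * cols / img_w))  # grid column
        idx = r * cols + c                        # linear motor index
        if not seq or seq[-1] != idx:
            seq.append(idx)
    return seq
```

Playing the sequence back at a fixed step rate traces the object's outline across the wearer's back, which is the time-spatial dynamic coding the abstract refers to.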
Citations: 6
Virtual Roommates in multiple shared spaces
Pub Date : 2011-03-19 DOI: 10.1109/ISVRI.2011.5759607
A. Sherstyuk, M. Gavrilova
Augmented Reality applications have already become part of everyday life, bringing virtual 3D objects into real-life scenes. In this paper, we introduce “Virtual Roommates”, a system that employs AR techniques to share people's presence, projected from remote locations. Virtual Roommates is a feature-based mapping between loosely linked spaces: it allows multiple physical and virtual scenes to be overlaid and populated with physical or virtual characters. As the name implies, the Virtual Roommates concept provides continuous ambient presence for multiple disparate groups, similar to people sharing living quarters, but without the boundaries of real space.
Citations: 1
Research of hand positioning and gesture recognition based on binocular vision
Pub Date : 2011-03-19 DOI: 10.1109/ISVRI.2011.5759657
Tong-de Tan, Zhijie Guo
This paper proposes a new method for extracting feature points of the hand. The method uses the center of mass of the hand as the match point and calculates the target's location from a mathematical model of binocular visual positioning. The convex hull points of the hand contour, obtained by image segmentation, are used to distinguish different gestures. Furthermore, a system combining three-dimensional hand positioning with gesture identification is designed; it serves as an interface for driving a virtual hand to grasp, move, and release virtual objects.
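In the standard rectified (parallel-axis) case, a binocular positioning model of this kind reduces to triangulation from disparity; a minimal sketch, with the focal length and baseline as assumed calibration inputs:

```python
def triangulate_parallel(xl, xr, y, f, baseline):
    """Recover a 3D point from a matched point in a rectified stereo pair.
    xl/xr: x-coordinates of the match point (e.g. the hand's center of mass)
    in the left/right images, measured from each principal point;
    f: focal length in pixels; baseline: camera separation."""
    d = xl - xr  # disparity
    if d <= 0:
        raise ValueError("non-positive disparity: point at or beyond infinity")
    Z = f * baseline / d  # depth from similar triangles
    X = xl * Z / f        # lateral position in the left-camera frame
    Y = y * Z / f
    return X, Y, Z
```

Matching the centers of mass of the segmented hand regions in the two views, as the abstract describes, supplies the (xl, xr) pair this computation needs.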
Citations: 29
SAMSON: Simulation supported traffic training environment
Pub Date : 2011-03-19 DOI: 10.3929/ETHZ-A-006397882
Adrian Steinemann, Yves Kellenberger, Pascal Peikert, A. Kunz
Since the publication of the first FTIR multi-touch interaction system [1], public attention to the field of Single Display Groupware (SDG) has been rising constantly. Earlier SDG systems with multi-user interaction capabilities, such as DiamondTouch [2], reacTable [3], and Microsoft Surface [4], have been followed by promising systems such as ThinSight [5] and MightyTrace [6], which integrate their tracking technology into commercial liquid crystal displays (LCDs) and thus drastically reduce space requirements. Some recently published work [7] also reflects a trend toward supporting industry-oriented tasks, as systems like BUILDIT [8] did some time ago. We present a traffic training simulator concept based on discrete event simulation that ensures realistic traffic behavior and adequate visualization, together with a user-centered interaction concept on an SDG system, to support training activities for police officers. Within this environment, police officers can train their behavior and decision-making under different traffic situations, much as other professionals train their standard procedures, e.g. pilots in a flight simulator. Such a training environment also makes it possible to learn offline about important characteristics of intersections from historical data, such as system stability, incident handling, or further improvement potential.
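The discrete event simulation at the core of such a concept can be sketched as a simulation clock driven by a priority queue of timestamped events (a generic skeleton, not the SAMSON implementation):

```python
import heapq

class DiscreteEventSimulator:
    """Minimal discrete event simulation core: events are timestamped
    actions processed in time order; an action may schedule new events."""

    def __init__(self):
        self.now = 0.0
        self._queue = []
        self._seq = 0  # tie-breaker keeping equal-time events in FIFO order

    def schedule(self, delay, action):
        """Schedule `action(sim)` to run `delay` time units from now."""
        heapq.heappush(self._queue, (self.now + delay, self._seq, action))
        self._seq += 1

    def run(self, until):
        """Process events in timestamp order up to the given time."""
        while self._queue and self._queue[0][0] <= until:
            self.now, _, action = heapq.heappop(self._queue)
            action(self)
```

An action receives the simulator, so, for example, a vehicle-arrival event can reschedule itself every few seconds to model a traffic stream approaching the trainee's intersection.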
Citations: 0