
2017 International Conference on Virtual Reality and Visualization (ICVRV): Latest Publications

A Recognition Method of Misjudgment Gesture Based on Convolutional Neural Network
Pub Date : 2017-10-01 DOI: 10.1109/ICVRV.2017.00062
Kaiyun Sun, Zhiquan Feng, Changsheng Ai, Yingjun Li, Jun Wei, Xiaohui Yang, Xiaopei Guo
Based on Kinect 2.0, a library of 17 static gestures was established and trained with a convolutional neural network. Extensive statistical experiments were carried out on the classification of each gesture. During these experiments we found that several of the 17 gestures were easily confused with one another; for ease of description, we call them similarity gestures. From a large-data perspective, it is assumed that the test results of the convolutional neural network model satisfy the law of large numbers. Therefore, for misjudged gestures, this paper presents a recognition method based on probability statistics.
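As a purely illustrative sketch (the abstract does not give the statistical decision rule), the Python fragment below shows one way the law-of-large-numbers assumption could be exploited in practice: per-frame CNN softmax outputs for a held gesture are averaged into an empirical class distribution, and the label is taken from that aggregate rather than from any single, possibly misjudged, frame. The function name and array shapes are assumptions, not the authors' implementation.

```python
import numpy as np

def classify_by_statistics(frame_probs):
    """Aggregate per-frame CNN softmax outputs of shape (n_frames, 17).

    By the law of large numbers, the mean of many per-frame predictions
    approaches the expected model output for the held gesture, which
    separates easily confused (similarity) gestures more reliably than
    a single-frame decision.
    """
    probs = np.asarray(frame_probs, dtype=float)
    mean_probs = probs.mean(axis=0)          # empirical class distribution
    return int(np.argmax(mean_probs)), mean_probs
```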
Citations: 0
Research on Thangka Image Scene Switching Based on VR
Pub Date : 2017-10-01 DOI: 10.1109/icvrv.2017.00103
Jianbang Jia, Chuan-qian Tang, Shou-Liang Tang, Huan Wu, Xiaojing Liu, Zhiqiang Liu
With the development of computer simulation technology and computer graphics, virtual reality (VR) has become a research hotspot and a difficult problem worldwide. Starting from practical needs, this paper presents a study of VR-based Thangka image browsing. Second-order gradient enhancement with the Sobel operator, a maximum-entropy segmentation algorithm, a maximum-gray-value segmentation algorithm, and a point-to-line symmetry method are used to realize VR-based Thangka image scene switching. Experimental results show that the processing time obtained through Leap Motion is 20-30 ms per frame and the accuracy of rigid-body region detection exceeds 70%, which basically meets the requirements of real-time and accurate switching of Thangka image scenes.
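Several classical operators are named above; as one concrete reference point, the sketch below implements a maximum-entropy (Kapur-style) threshold for an 8-bit grayscale image, which is a standard form of the maximum entropy segmentation algorithm mentioned in the abstract. It is a generic baseline, not code from the paper.

```python
import numpy as np

def max_entropy_threshold(gray):
    """Kapur-style maximum-entropy threshold for an 8-bit grayscale image."""
    hist, _ = np.histogram(gray, bins=256, range=(0, 256))
    p = hist / hist.sum()
    best_t, best_h = 0, -np.inf
    for t in range(1, 256):
        p0, p1 = p[:t].sum(), p[t:].sum()
        if p0 == 0 or p1 == 0:
            continue
        q0, q1 = p[:t] / p0, p[t:] / p1
        h0 = -np.sum(q0[q0 > 0] * np.log(q0[q0 > 0]))   # entropy below threshold
        h1 = -np.sum(q1[q1 > 0] * np.log(q1[q1 > 0]))   # entropy above threshold
        if h0 + h1 > best_h:
            best_h, best_t = h0 + h1, t
    return best_t   # pixels >= best_t form one class, the rest the other
```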
Citations: 1
Wide Baseline Image Stitching with Structure-Preserving
Pub Date : 2017-10-01 DOI: 10.1109/ICVRV.2017.00050
Mingjun Cao, Wei Lyu, Zhong Zhou, Wei Wu
This paper presents a novel stitching approach for wide-baseline images with low texture. Firstly, a three-phase feature matching model is applied to extract rich and reliable feature matches; when texture is sparse, line matching and contour matching compensate for the poor quality of point matching. Then, a structure-preserving warp is performed: several constraints are defined and an objective function is minimized to solve for the optimal mesh, from which multiple affine matrices are obtained to warp the images. Furthermore, alignment error, color difference, and saliency difference are jointly considered to find the optimal seam for image blending. Experiments on both common data sets and challenging surveillance scenes demonstrate the effectiveness of the proposed method, which performs strongly compared with other state-of-the-art methods.
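The abstract states that alignment error, color difference, and saliency difference are combined when choosing the blending seam. A minimal sketch of how such a seam search could be realized, assuming the three terms are already available as per-pixel maps over the overlap region and that the weights are free parameters rather than the paper's values, is a dynamic-programming vertical seam:

```python
import numpy as np

def seam_cost(align_err, color_diff, saliency_diff, w=(1.0, 1.0, 1.0)):
    """Combine the three per-pixel terms; the weights are assumptions."""
    return w[0] * align_err + w[1] * color_diff + w[2] * saliency_diff

def find_vertical_seam(cost):
    """Minimum-cost top-to-bottom seam through a cost map (dynamic programming)."""
    h, w = cost.shape
    acc = cost.astype(float)
    for y in range(1, h):
        left = np.r_[np.inf, acc[y - 1, :-1]]
        right = np.r_[acc[y - 1, 1:], np.inf]
        acc[y] += np.minimum(np.minimum(left, acc[y - 1]), right)
    seam = np.empty(h, dtype=int)
    seam[-1] = int(np.argmin(acc[-1]))
    for y in range(h - 2, -1, -1):
        x = seam[y + 1]
        lo, hi = max(0, x - 1), min(w, x + 2)
        seam[y] = lo + int(np.argmin(acc[y, lo:hi]))
    return seam   # seam[y] = column at which the warped images are joined in row y
```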
Citations: 0
Avatars' Skeleton Connection and Movement Data Network Synchronization
Pub Date : 2017-10-01 DOI: 10.1109/ICVRV.2017.00096
Q. Qi, Sanyuan Zhao, Shuai Wang, Linjing Lai, Zhengchao Lei, Hongmei Song
With the rapid development of computer graphics and networking, virtual reality technology has begun to penetrate the field of social networks. Because a single virtual reality device offers limited functionality, combining different devices is a feasible approach to immersing people in the real world. In this paper, we combine the Kinect v1 with a Leap Motion sensor for whole-body gesture capture and overcome two difficulties: avatars' skeleton connection and movement data synchronization. Experiments confirm that our method performs well and could contribute meaningfully to future multi-player interactive virtual social platforms.
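The abstract names skeleton connection as one of the two difficulties but does not spell out the transform. The sketch below shows one plausible attachment step under stated assumptions: each Leap Motion hand joint is expressed relative to the Leap palm, rotated into the Kinect frame with a calibrated rotation, and translated onto the Kinect wrist joint. The calibration matrix and argument names are hypothetical.

```python
import numpy as np

# Assumed calibrated rotation from the Leap Motion frame to the Kinect frame
# (identity used here as a placeholder); in practice it would be estimated
# once, e.g. from corresponding palm/wrist samples captured by both devices.
R_LEAP_TO_KINECT = np.eye(3)

def attach_hand_to_skeleton(leap_points, leap_palm, kinect_wrist):
    """Map Leap Motion hand joints (N x 3, Leap frame) onto the Kinect skeleton.

    Each Leap point is expressed relative to the Leap palm, rotated into the
    Kinect frame, and translated so the palm coincides with the Kinect wrist.
    """
    local = np.asarray(leap_points) - np.asarray(leap_palm)   # palm-relative
    return (R_LEAP_TO_KINECT @ local.T).T + np.asarray(kinect_wrist)
```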
Citations: 0
Real-Time Viscoelastic Fluid Simulation and Solid Melting Process Based on AVR-SPH
Pub Date : 2017-10-01 DOI: 10.1109/ICVRV.2017.00092
Mingjing Ai, Baohe Chen, Qunfang Yang
In this paper, we propose an artificial viscosity relaxation (AVR) model based on the SPH method to simulate fluid viscosity. The model modifies the velocities of adjacent particle pairs by introducing a velocity relaxation amount, thereby updating the velocities and simulating the motion of the fluid. We also apply the improved method to realize the complete process of solid melting. As the experimental results show, the proposed method greatly simplifies the computation and reduces its cost, and it reaches a higher frame rate for the same number of particles.
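No formula for the velocity relaxation amount is given in the abstract; the fragment below is only a sketch of what a momentum-conserving pairwise relaxation could look like, with the relaxation coefficient k and the neighbour list treated as assumptions rather than the paper's model.

```python
import numpy as np

def relax_pair_velocities(vel, pairs, k=0.1):
    """Pairwise velocity relaxation as a viscosity proxy.

    vel   : (N, 3) particle velocities, updated in place
    pairs : iterable of (i, j) neighbouring particle indices
    k     : relaxation coefficient in [0, 1]; larger k means more viscous behaviour
    """
    for i, j in pairs:
        dv = k * (vel[j] - vel[i])   # relaxation amount toward the pair's mean velocity
        vel[i] += 0.5 * dv           # equal and opposite updates keep the pair's
        vel[j] -= 0.5 * dv           # total momentum unchanged (equal masses assumed)
    return vel
```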
Citations: 0
Improved Mesh Segmentation with Perception-Aware Cuts
Pub Date : 2017-10-01 DOI: 10.1109/icvrv.2017.00028
Tianhao Gao, Wencheng Wang, B. Zhu
High-quality mesh segmentation depends on high-quality cuts. Unfortunately, the cuts produced by existing methods are not very satisfactory: their global measurements tend to ignore the effects of local features, while their local measurements amplify the influence of facet details through error accumulation. We observe that the cuts preferred by human beings depend much more on the overall characteristics of local regions, a kind of intermediate-level feature, especially in concave regions. Thus, we present a construct that enhances the representation of overall characteristics in concave regions to improve cut initialization there, and we design novel energy functions, based mainly on intermediate-level features, for extending cutting lines until they are closed. Then, based on the obtained closed cutting lines, we perform meaningful mesh segmentation in a bottom-up manner according to application requirements. Compared with state-of-the-art methods, our cuts are preferred by human observers, as shown by experimental results on a benchmark.
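The paper's energy functions are not reproduced in the abstract. As background for why cuts gravitate toward concave regions, the snippet below sketches a standard concavity-aware edge capacity of the kind long used in graph-cut mesh segmentation (in the spirit of Katz and Tal): concave dihedral edges receive a large angular distance and hence a small capacity, so a minimum cut prefers to pass through them. It is a generic baseline, not the authors' formulation.

```python
import numpy as np

def cut_capacity(n1, n2, c1, c2, eta_convex=0.1, avg_ang=0.5):
    """Concavity-aware capacity of the dual-graph edge between two adjacent faces.

    n1, n2 : unit outward normals of the two faces
    c1, c2 : face centroids, used to decide whether the shared edge is concave
    avg_ang: assumed average angular distance over the mesh, for normalisation
    """
    cos_a = float(np.clip(np.dot(n1, n2), -1.0, 1.0))
    concave = np.dot(n1, np.asarray(c2) - np.asarray(c1)) > 0   # valley-like edge
    ang_dist = (1.0 if concave else eta_convex) * (1.0 - cos_a)
    return 1.0 / (1.0 + ang_dist / avg_ang)   # small capacity across concave edges
```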
Citations: 0
Under the Movement of Head: Evaluating Visual Attention in Immersive Virtual Reality Environment
Pub Date : 2017-10-01 DOI: 10.1109/ICVRV.2017.00067
Honglei Han, Aidong Lu, U. Wells
A method is proposed to measure what, and how deeply, a user can perceive in immersive virtual reality environments. A preliminary user study was carried out to verify that user gaze behavior differs in specific ways between immersive virtual reality environments and traditional non-immersive environments based on 2D monitors and interactive hardware. Analysis of the study results shows that in immersive virtual reality environments users tend to move their head so that the object of interest lies at the center of the view, whereas in non-immersive environments users tend to move their eyes and move the avatar's head only when necessary. Based on this finding, a quantitative equation is proposed to measure the user's attention in immersive virtual reality environments. It can be used in a quality evaluation system to help designers find design issues in a scene that reduce the effectiveness of the narrative.
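The quantitative equation itself is not reproduced in the abstract; the sketch below only illustrates the underlying idea, namely weighting an object by how close it stays to the centre of the head-forward view over time, using an assumed Gaussian falloff. The falloff width and the function name are hypothetical, not the authors' formula.

```python
import numpy as np

def attention_score(head_forward, to_object, sigma_deg=15.0):
    """Average attention weight for one object over a sequence of frames.

    head_forward : (N, 3) head/view forward vectors, one per frame
    to_object    : (N, 3) vectors from the head to the object, one per frame
    A frame contributes most when the object lies near the view centre,
    which is what head movement achieves in immersive VR.
    """
    f = head_forward / np.linalg.norm(head_forward, axis=1, keepdims=True)
    o = to_object / np.linalg.norm(to_object, axis=1, keepdims=True)
    ang = np.degrees(np.arccos(np.clip(np.sum(f * o, axis=1), -1.0, 1.0)))
    return float(np.mean(np.exp(-(ang / sigma_deg) ** 2)))
```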
Citations: 2
Research on Hydrodynamic Forces of KCS Container Ship Based on Numerical Analysis
Pub Date : 2017-10-01 DOI: 10.1109/ICVRV.2017.00076
Jing-jing Lian, Xiao Yang
A RANS simulation of the flow past the KCS container ship undergoing prescribed direct motion is performed. The commercial CFD solver FLUENT is employed to solve the RANS equations. The RNG turbulence model is adopted in the computation, and the SIMPLE algorithm is used to couple velocity and pressure in the governing equations. The ship's direct motion at different Froude numbers is simulated to obtain the flow field around the ship and the ship resistance. Computational results with and without the free surface are obtained. Validation is presented by comparing the numerical results with the experimental results of MOERI.
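For reference, the two standard non-dimensional quantities involved in such a resistance study, the Froude number used to set the test speeds and the total resistance coefficient used for comparison with measurements, can be computed as follows; the values in the example call are illustrative, not the paper's test conditions.

```python
import math

def froude_number(speed, length_wl, g=9.81):
    """Fr = U / sqrt(g * L), with speed U in m/s and waterline length L in m."""
    return speed / math.sqrt(g * length_wl)

def total_resistance_coefficient(resistance, speed, wetted_area, rho=998.2):
    """Ct = Rt / (0.5 * rho * S * U^2), the usual non-dimensional total resistance."""
    return resistance / (0.5 * rho * wetted_area * speed ** 2)

# Illustrative model-scale values, not the paper's conditions: Fr comes out near 0.26.
print(froude_number(speed=2.196, length_wl=7.2786))
```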
Citations: 1
Mixed Reality Application: A Framework of Markerless Assembly Guidance System with Hololens Glass
Pub Date : 2017-10-01 DOI: 10.1109/ICVRV.2017.00110
Zhu Teng, He Hanwu, Wu Yueming, Chen He-en, Chen Yongbin
We propose a markerless MR guidance system for factory assembly, which uses an augmented reality device and a camera sensor to display a virtual model at a designated position in the real world. By taking advantage of image processing methods, the system can automatically detect the locations of the device and the target. Application results in a real factory scene show that the guidance system performs well: it can track changes in the posture of the target products in less than 200 ms and then adjust the virtual models to the correct position accordingly.
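Details of the tracking pipeline are not given in the abstract; the fragment below sketches only the final placement step under stated assumptions: the target pose estimated in the camera frame is composed with the headset's camera-to-world pose to obtain the world-space pose at which the virtual model is rendered. The matrix names are hypothetical.

```python
import numpy as np

def pose_matrix(R, t):
    """Build a 4x4 homogeneous transform from a 3x3 rotation and a translation."""
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = t
    return T

def model_world_pose(T_world_from_camera, T_camera_from_target):
    """World pose for the virtual model anchored on the detected target.

    T_world_from_camera : headset camera pose in the world (reported by the device)
    T_camera_from_target: target pose estimated in the camera frame by image
                          processing; composing the two places the virtual model
                          on the physical target each frame.
    """
    return T_world_from_camera @ T_camera_from_target
```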
Citations: 4
A Master-Slave Hand System for Virtual Reality Interaction
Pub Date : 2017-10-01 DOI: 10.1109/ICVRV.2017.00064
Yang Wenzhen, Dong Lujie, Yan Ming, Wu Xinli, Jiang Zhaona, Pan Zhigeng
Manipulating virtual objects with our real hands is a great challenge for the virtual reality community. We present a master-slave hand system for naturally manipulating virtual objects with a user's hand. The system obtains the position, orientation, and finger joint angles of the user's hand, which are used to drive a dexterous virtual hand that interacts with virtual environments. The dexterous virtual hand we modeled has motion functions analogous to those of the real hand. Simplified virtual-hand manipulation intentions that we define help the dexterous virtual hand manipulate virtual objects conveniently. A virtual assembly system prototype validates that this master-slave hand system attains intuitive and flexible hands-on interaction with virtual environments.
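The mapping from the measured hand to the virtual hand is not detailed in the abstract; as a minimal sketch of one plausible retargeting step, the fragment below clamps each measured finger joint angle to the virtual hand's joint limits before driving the model. The limit table and joint names are assumptions, not the authors' design.

```python
import numpy as np

# Assumed per-joint angle limits of the virtual hand, in radians (placeholders).
JOINT_LIMITS = {
    "thumb_mcp": (0.0, 1.0),
    "index_pip": (0.0, 1.9),
    "index_dip": (0.0, 1.2),
}

def retarget_joints(measured):
    """Clamp measured joint angles (dict: joint name -> radians) to the virtual
    hand's limits so the dexterous virtual hand mirrors the user's hand without
    exceeding its own range of motion."""
    return {name: float(np.clip(angle, *JOINT_LIMITS.get(name, (-np.pi, np.pi))))
            for name, angle in measured.items()}
```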
Citations: 1