
Latest publications from the 2017 International Conference on Virtual Reality and Visualization (ICVRV)

Under the Movement of Head: Evaluating Visual Attention in Immersive Virtual Reality Environment
Pub Date : 2017-10-01 DOI: 10.1109/ICVRV.2017.00067
Honglei Han, Aidong Lu, U. Wells
A method is proposed to measure what, and how deeply, a user can perceive in immersive virtual reality environments. A preliminary user study verified that user gaze behavior differs in specific ways between immersive virtual reality environments and traditional non-immersive ones based on 2D monitors and interactive hardware. Analysis of the study results shows that in immersive environments users are more likely to move their heads so that an object of interest lies at the center of the view, whereas in non-immersive environments users tend to move only their eyes, moving the avatar's head only when necessary. Based on this finding, a quantitative equation is proposed to measure the user's attention in immersive virtual reality environments. It can be used in a quality evaluation system to help designers find design issues in a scene that reduce the effectiveness of the narrative.
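The abstract does not reproduce the paper's quantitative equation; as a minimal illustration of the underlying idea (attention inferred from how closely the head's forward vector points at an object), one might compute an angular score. The function name and the cosine falloff below are assumptions for illustration, not the authors' formula.

```python
import math

def head_attention_score(head_forward, to_object):
    """Score in [0, 1]: 1 when the head points straight at the object,
    falling off with the angle between the head-forward vector and the
    direction from the head to the object (an assumed, not published, metric)."""
    def norm(v):
        m = math.sqrt(sum(c * c for c in v))
        return tuple(c / m for c in v)
    f, d = norm(head_forward), norm(to_object)
    cos_angle = sum(a * b for a, b in zip(f, d))
    # Clamp: objects behind the head receive zero attention.
    return max(0.0, cos_angle)

# Object dead ahead scores 1; object 90 degrees off-axis scores 0.
print(head_attention_score((0, 0, 1), (0, 0, 5)))  # → 1.0
print(head_attention_score((0, 0, 1), (1, 0, 0)))  # → 0.0
```

Aggregating such per-object scores over time would be one way to feed the kind of quality evaluation system the abstract describes.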
Citations: 2
TrCollage: Efficient Image Collage Using Tree-Based Layer Reordering
Pub Date : 2017-10-01 DOI: 10.1109/ICVRV.2017.00120
Shiguang Liu, Xiaobing Wang, Ping Li, Jun-yong Noh
This paper proposes an efficient image collage approach called TrCollage that uses tree-based layer reordering, taking into account not only the efficiency of collage processing but also the quality of the final collage. In addition, the tree-based TrCollage meets demands raised by the rapid development of mobile technology, which calls for robust, efficient picture collage without computation-intensive processing such as graph cut or saliency detection. Experimental results demonstrate the efficiency and effectiveness of TrCollage, which produces high-quality image collages through layer reordering.
Citations: 1
Improved Mesh Segmentation with Perception-Aware Cuts
Pub Date : 2017-10-01 DOI: 10.1109/icvrv.2017.00028
Tianhao Gao, Wencheng Wang, B. Zhu
High-quality mesh segmentation depends on high-quality cuts. Unfortunately, the cuts produced by existing methods are not very satisfactory: their global measurements tend to ignore the effects of local features, while their local measurements amplify the influence of facet details through error accumulation. We observe that the cuts preferred by humans depend much more on the overall characteristics of local regions, a kind of intermediate-level feature, especially in concave regions. We therefore present a construct that enhances the representation of overall characteristics in concave regions to improve cut initialization there, and design novel energy functions, based mainly on intermediate-level features, to extend cutting lines into closed loops. Based on the resulting closed cutting lines, we perform meaningful mesh segmentation in a bottom-up manner according to application requirements. Experimental results on a benchmark show that, compared with state-of-the-art methods, our cuts are preferred by human observers.
Citations: 0
A Recognition Method of Misjudgment Gesture Based on Convolutional Neural Network
Pub Date : 2017-10-01 DOI: 10.1109/ICVRV.2017.00062
Kaiyun Sun, Zhiquan Feng, Changsheng Ai, Yingjun Li, Jun Wei, Xiaohui Yang, Xiaopei Guo
Based on Kinect 2.0, a library of 17 static gestures was established and trained with a convolutional neural network. Extensive statistical experiments were carried out on the classification of each gesture. During the experiments we found that several of the 17 gestures were easily confused with one another; for ease of description, we call these similarity gestures. From a large-data perspective, the test results of the convolutional neural network model are assumed to satisfy the law of large numbers. For misjudged gestures, this paper therefore presents a recognition method based on probability statistics.
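The abstract does not spell out the probability-statistics method; a minimal sketch of the general idea (using per-class statistics gathered over many trials to re-weight a classifier's output for easily confused gestures) might look like the following. The confusion matrix and the Bayes-style re-weighting are assumptions for illustration, not the paper's exact procedure.

```python
def correct_prediction(scores, confusion, priors):
    """Re-weight raw classifier scores with statistics from repeated trials.

    scores[j]       : classifier confidence that the input is gesture j
    confusion[i][j] : estimated P(classifier outputs j | true gesture is i)
    priors[i]       : P(true gesture is i)
    Returns the index of the most probable true gesture.
    """
    n = len(priors)
    posterior = []
    for i in range(n):
        # P(true = i) * sum_j P(output j | true i) * score_j
        likelihood = sum(confusion[i][j] * scores[j] for j in range(n))
        posterior.append(priors[i] * likelihood)
    return max(range(n), key=lambda i: posterior[i])

# Two similarity gestures: the classifier slightly prefers gesture 0,
# but gesture 1 is frequently misread as 0, so the statistics flip the call.
confusion = [[0.9, 0.1],   # true 0: almost always read correctly
             [0.6, 0.4]]   # true 1: often misread as 0
print(correct_prediction([0.55, 0.45], confusion, [0.3, 0.7]))  # → 1
```

The confusion matrix itself would be estimated from the large number of classification trials the abstract mentions.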
Citations: 0
Avatars' Skeleton Connection and Movement Data Network Synchronization
Pub Date : 2017-10-01 DOI: 10.1109/ICVRV.2017.00096
Q. Qi, Sanyuan Zhao, Shuai Wang, Linjing Lai, Zhengchao Lei, Hongmei Song
With the rapid development of computer graphics and networking, virtual reality technology has begun to penetrate the field of social networks. Because a single virtual reality device has limited functionality, combining different devices is a feasible way to achieve immersion. In this paper, we combine the Kinect v1 with a Leap Motion sensor for whole-body gesture capture, overcoming two difficulties: connecting the avatars' skeletons and synchronizing movement data over the network. Experiments show that our method performs well and could contribute meaningfully to future multi-player interactive virtual social platforms.
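One way to picture the skeleton-connection problem is as a frame change: the Leap Motion reports hand joints in its own local frame, which must be re-expressed in the Kinect's world frame at the tracked wrist. The sketch below is an assumed illustration of that composition (it presumes the Leap frame is already aligned with the wrist, which real calibration would have to establish), not the paper's actual pipeline.

```python
def attach_hand_to_wrist(wrist_pos, wrist_rot, leap_points):
    """Transform Leap Motion hand points (local to the Leap frame,
    here assumed aligned with the wrist) into the Kinect world frame:
    world = wrist_pos + R(wrist_rot) * local.
    wrist_rot is a 3x3 rotation matrix given as nested lists."""
    out = []
    for p in leap_points:
        rotated = [sum(wrist_rot[r][c] * p[c] for c in range(3)) for r in range(3)]
        out.append(tuple(rotated[k] + wrist_pos[k] for k in range(3)))
    return out

identity = [[1, 0, 0], [0, 1, 0], [0, 0, 1]]
# A fingertip 0.1 m in front of the wrist in the local frame.
print(attach_hand_to_wrist((1.0, 1.2, 0.5), identity, [(0.0, 0.0, 0.1)]))
# → [(1.0, 1.2, 0.6)]
```

Network synchronization would then only need to ship the compact joint transforms, with each client re-composing the full skeleton locally.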
Citations: 0
Real-Time Viscoelastic Fluid Simulation and Solid Melting Process Based on AVR-SPH
Pub Date : 2017-10-01 DOI: 10.1109/ICVRV.2017.00092
Mingjing Ai, Baohe Chen, Qunfang Yang
In this paper, we propose an artificial viscosity relaxation (AVR) model based on the SPH method to simulate fluid viscosity. The model modifies the velocities of adjacent particle pairs by introducing a velocity relaxation amount, thereby updating the velocities and simulating the motion of the fluid. We also apply the improved method to simulate the complete process of solid melting. Experimental results show that the proposed method greatly simplifies the calculation, reduces the computational cost, and reaches a higher frame rate for the same number of particles.
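The abstract's "velocity relaxation amount" for adjacent particle pairs suggests a kernel-weighted pull of each particle's velocity toward its neighbors', in the spirit of XSPH-style smoothing. The one-dimensional sketch below is an assumed illustration of that idea (the linear kernel and the relaxation factor are placeholders, not the paper's AVR formulation):

```python
def relax_velocities(positions, velocities, h, alpha):
    """Pull each particle's velocity toward its neighbors' velocities.
    h     : smoothing radius; pairs farther apart than h do not interact
    alpha : relaxation strength in [0, 1]
    Returns the updated velocity list (1-D particles for simplicity)."""
    n = len(positions)
    new_v = list(velocities)
    for i in range(n):
        correction = 0.0
        for j in range(n):
            if i == j:
                continue
            r = abs(positions[i] - positions[j])
            if r < h:
                w = 1.0 - r / h          # simple linear kernel (assumed)
                correction += w * (velocities[j] - velocities[i])
        new_v[i] = velocities[i] + alpha * correction
    return new_v

# Two close particles moving apart: relaxation damps their relative velocity,
# which is the viscous effect the AVR model aims to capture.
print(relax_velocities([0.0, 0.1], [-1.0, 1.0], h=0.5, alpha=0.25))
```

Raising `alpha` (or, in a melting simulation, lowering it as temperature increases) would move the material between stiff and freely flowing behavior.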
Citations: 0
Wide Baseline Image Stitching with Structure-Preserving
Pub Date : 2017-10-01 DOI: 10.1109/ICVRV.2017.00050
Mingjun Cao, Wei Lyu, Zhong Zhou, Wei Wu
This paper presents a novel stitching approach for wide-baseline images with low texture. First, a three-phase feature matching model is applied to extract rich and reliable feature matches; under low texture, line matching and contour matching compensate for the poor quality of point matching. Then a structure-preserving warp is performed by defining several constraints and minimizing an objective function to solve for the optimal mesh, from which multiple affine matrices are obtained to warp the images. Furthermore, alignment error, color difference, and saliency difference are considered jointly to find the optimal seam for image blending. Experiments on both common data sets and challenging surveillance scenes illustrate the effectiveness of the proposed method, which performs strongly compared with other state-of-the-art methods.
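Once alignment error, color difference, and saliency are combined into a single per-pixel cost map, finding the optimal seam is a classic dynamic-programming problem. The sketch below illustrates that step only; the toy cost map and equal weighting of the three terms are assumptions, not the paper's formulation.

```python
def optimal_seam(cost):
    """Find the minimal-cost top-to-bottom seam through a 2-D cost map,
    where cost[y][x] already combines alignment, color, and saliency terms.
    Returns the x index chosen in each row (adjacent rows differ by <= 1)."""
    h, w = len(cost), len(cost[0])
    acc = [row[:] for row in cost]           # accumulated cost table
    for y in range(1, h):
        for x in range(w):
            lo, hi = max(0, x - 1), min(w - 1, x + 1)
            acc[y][x] += min(acc[y - 1][lo:hi + 1])
    # Backtrack from the cheapest bottom cell.
    x = min(range(w), key=lambda i: acc[h - 1][i])
    seam = [x]
    for y in range(h - 2, -1, -1):
        lo, hi = max(0, x - 1), min(w - 1, x + 1)
        x = min(range(lo, hi + 1), key=lambda i: acc[y][i])
        seam.append(x)
    return seam[::-1]

# Toy 3x3 cost map: the cheap diagonal is the obvious seam.
cost = [[1, 9, 9],
        [9, 1, 9],
        [9, 9, 1]]
print(optimal_seam(cost))  # → [0, 1, 2]
```

Blending along such a seam keeps the transition in regions where the two warped images already agree and where misalignment is least noticeable.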
Citations: 0
Research on Thangka Image Scene Switching Based on VR
Pub Date : 2017-10-01 DOI: 10.1109/icvrv.2017.00103
Jianbang Jia, Chuan-qian Tang, Shou-Liang Tang, Huan Wu, Xiaojing Liu, Zhiqiang Liu
With the development of computer simulation technology and computer graphics, virtual reality (VR) has become both a hotspot and a challenge in current research. Starting from practical needs, this paper presents research on Thangka image browsing based on VR. A second-order gradient enhancement of the Sobel operator, a maximum entropy segmentation algorithm, a maximum gray-value segmentation algorithm, and a point-to-line symmetry method are used to realize VR-based Thangka image scene switching. Experimental results show that the processing time obtained through Leap Motion is 20-30 ms per frame, and the accuracy of rigid-body region detection exceeds 70%. The system can basically meet the requirements of real-time, accurate switching of Thangka image scenes.
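Of the techniques listed, maximum entropy segmentation is the most self-contained: a Kapur-style threshold picks the gray level that maximizes the summed entropies of the background and foreground distributions. The sketch below illustrates that idea over a toy histogram (the exhaustive search and the histogram are assumptions for illustration, not the paper's implementation):

```python
import math

def max_entropy_threshold(hist):
    """Kapur-style threshold: choose t maximizing the summed entropies of
    the background (levels <= t) and foreground (levels > t) distributions.
    hist: pixel counts per gray level."""
    total = sum(hist)
    probs = [h / total for h in hist]
    best_t, best_h = 0, float("-inf")
    for t in range(len(hist) - 1):
        p_bg = sum(probs[: t + 1])
        p_fg = 1.0 - p_bg
        if p_bg <= 0 or p_fg <= 0:
            continue  # one side empty: no valid split here
        h_bg = -sum(p / p_bg * math.log(p / p_bg) for p in probs[: t + 1] if p > 0)
        h_fg = -sum(p / p_fg * math.log(p / p_fg) for p in probs[t + 1:] if p > 0)
        if h_bg + h_fg > best_h:
            best_h, best_t = h_bg + h_fg, t
    return best_t

# Bimodal toy histogram over 8 gray levels: dark peak near 1, bright peak near 6.
hist = [2, 10, 3, 0, 0, 3, 10, 2]
print(max_entropy_threshold(hist))  # a mid-range threshold separating the peaks
```

Such a threshold would separate the hand or rigid-body region from the background before the symmetry-based detection step.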
Citations: 1
Mixed Reality Application: A Framework of Markerless Assembly Guidance System with Hololens Glass
Pub Date : 2017-10-01 DOI: 10.1109/ICVRV.2017.00110
Zhu Teng, He Hanwu, Wu Yueming, Chen He-en, Chen Yongbin
We propose a markerless MR guidance system for manufacturing assembly that uses an augmented reality device and a camera sensor to display virtual models at designated positions in the real world. By taking advantage of image processing methods, the system can automatically detect the locations of the device and the target. Application results in a real manufacturing scene show that the guidance system performs well: it can track changes in the posture of target products in less than 200 ms and then adjust the virtual models into the correct positions accordingly.
Citations: 4
Research on Flexible Mapping Among Multiple Gestures and One Semantic in Intelligent Teaching Interface
Pub Date : 2017-10-01 DOI: 10.1109/ICVRV.2017.00093
Yu Qiao, Zhiquan Feng, Changsheng Ai, Yingjun Li, Jun Wei, Xiaohui Yang, Tao Xu, Xiaoyan Zhou
In recent years, gesture-based interaction has become a research hotspot. This paper focuses on the design and implementation of a gesture-based intelligent teaching interface, which interacts with the teacher by obtaining hand gesture information through the Kinect. Two problems arise in this interaction: the interface may respond incorrectly to an interactive command, or it may fail to respond at all. We therefore propose flexible mapping among multiple gestures and one semantic (FMGS) within the same context. Experiments show that FMGS solves both problems and effectively reduces the user's cognitive load.
Citations: 0