Using the Kinect 2.0, we built a library of 17 static gestures and trained a convolutional neural network (CNN) to classify them. Extensive statistical experiments on the classification of each gesture revealed that several of the 17 gestures are easily confused with one another; for convenience of description, we call these similarity gestures. We assume that, over a large amount of test data, the results of the CNN model satisfy the law of large numbers. On this basis, this paper presents a recognition method for misjudged gestures based on probability statistics.
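The probability-statistics decision described above can be sketched as a frequency vote over many per-frame CNN predictions: if the test results obey the law of large numbers, the empirical frequency of each label converges to its classification probability, so the most frequent label is the statistical choice. The confusion sets and gesture labels below are hypothetical placeholders, not the paper's actual 17 classes:

```python
from collections import Counter

# Hypothetical confusion sets of "similarity gestures"; the labels are
# placeholders, not the paper's actual gesture classes.
SIMILARITY_SETS = [{"fist", "thumb_in_fist"}, {"point", "two_fingers"}]

def resolve_by_statistics(frame_predictions):
    """Final label from many per-frame CNN predictions: by the law of
    large numbers, the most frequent label over enough frames is the
    statistically most probable gesture."""
    counts = Counter(frame_predictions)
    return counts.most_common(1)[0][0]

def final_label(raw_prediction, frame_predictions):
    """Keep the CNN's single-frame answer unless it belongs to a
    confusion set, in which case fall back to the frequency vote."""
    if any(raw_prediction in s for s in SIMILARITY_SETS):
        return resolve_by_statistics(frame_predictions)
    return raw_prediction

votes = ["fist", "thumb_in_fist", "fist", "fist", "thumb_in_fist", "fist"]
print(final_label("thumb_in_fist", votes))  # fist
```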
Kaiyun Sun, Zhiquan Feng, Changsheng Ai, Yingjun Li, Jun Wei, Xiaohui Yang, Xiaopei Guo, "A Recognition Method of Misjudgment Gesture Based on Convolutional Neural Network," 2017 International Conference on Virtual Reality and Visualization (ICVRV), Oct. 2017. DOI: 10.1109/ICVRV.2017.00062.
With the development of computer simulation technology and computer graphics, virtual reality (VR) has become a research hotspot and a difficult problem worldwide. Starting from practical needs, this paper presents a study of Thangka image browsing based on VR. Second-order gradient enhancement with the Sobel operator, a maximum-entropy segmentation algorithm, a maximum-gray-value segmentation algorithm, and a point-to-line symmetry method are combined to realize VR-based Thangka image scene switching. Experimental results show a processing time of 20-30 ms per frame with Leap Motion input and a rigid-body region detection accuracy above 70%, which basically meets the requirements of real-time, accurate switching of Thangka image scenes.
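Two of the building blocks named above, Sobel gradient computation and maximum-entropy (Kapur-style) thresholding, can be sketched as follows. This is a minimal illustration of the standard algorithms, not the authors' implementation:

```python
import numpy as np

def sobel_gradient(img):
    """Gradient magnitude of a 2-D float image via 3x3 Sobel kernels."""
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
    ky = kx.T
    pad = np.pad(img, 1, mode="edge")
    gx = np.zeros_like(img)
    gy = np.zeros_like(img)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            win = pad[i:i + 3, j:j + 3]
            gx[i, j] = (win * kx).sum()
            gy[i, j] = (win * ky).sum()
    return np.hypot(gx, gy)

def max_entropy_threshold(gray, bins=256):
    """Threshold maximizing the summed entropy of foreground and
    background histograms (Kapur's criterion)."""
    hist, _ = np.histogram(gray, bins=bins, range=(0, bins))
    p = hist / hist.sum()
    best_t, best_h = 0, -np.inf
    for t in range(1, bins):
        w0, w1 = p[:t].sum(), p[t:].sum()
        if w0 <= 0 or w1 <= 0:
            continue
        p0, p1 = p[:t] / w0, p[t:] / w1
        h = -(p0[p0 > 0] * np.log(p0[p0 > 0])).sum() \
            - (p1[p1 > 0] * np.log(p1[p1 > 0])).sum()
        if h > best_h:
            best_t, best_h = t, h
    return best_t
```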
Jianbang Jia, Chuan-qian Tang, Shou-Liang Tang, Huan Wu, Xiaojing Liu, Zhiqiang Liu, "Research on Thangka Image Scene Switching Based on VR," 2017 International Conference on Virtual Reality and Visualization (ICVRV), Oct. 2017. DOI: 10.1109/icvrv.2017.00103.
Pub Date: 2017-10-01 | DOI: 10.1109/ICVRV.2017.00050
Mingjun Cao, Wei Lyu, Zhong Zhou, Wei Wu
This paper presents a novel stitching approach for wide-baseline images with low texture. First, a three-phase feature matching model is applied to extract rich and reliable feature matches; in low-texture cases, line matching and contour matching compensate for the poor quality of point matching. Then a structure-preserving warp is performed by defining several constraints and minimizing an objective function to solve for the optimal mesh, from which multiple affine matrices are obtained to warp the images. Furthermore, we jointly consider alignment error, color difference, and saliency difference to find the optimal seam for image blending. Experiments on both common datasets and challenging surveillance scenes demonstrate the effectiveness of the proposed method, which shows outstanding performance compared with other state-of-the-art methods.
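The seam-finding step can be illustrated with a standard dynamic-programming seam search over a per-pixel cost combining the three terms named above. The equal default weights and the vertical-seam formulation are assumptions for illustration, not details from the paper:

```python
import numpy as np

def seam_cost(align_err, color_diff, saliency_diff, w=(1.0, 1.0, 1.0)):
    """Weighted per-pixel cost over the overlap region; the weights
    are assumed, not taken from the paper."""
    return w[0] * align_err + w[1] * color_diff + w[2] * saliency_diff

def best_vertical_seam(cost):
    """Minimum-cost top-to-bottom seam via dynamic programming,
    allowing the seam to shift at most one column per row."""
    h, w = cost.shape
    acc = cost.copy()
    for i in range(1, h):
        for j in range(w):
            lo, hi = max(j - 1, 0), min(j + 2, w)
            acc[i, j] += acc[i - 1, lo:hi].min()
    seam = [int(acc[-1].argmin())]
    for i in range(h - 2, -1, -1):
        j = seam[-1]
        lo, hi = max(j - 1, 0), min(j + 2, w)
        seam.append(lo + int(acc[i, lo:hi].argmin()))
    return seam[::-1]  # column index of the seam in each row
```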
Mingjun Cao, Wei Lyu, Zhong Zhou, Wei Wu, "Wide Baseline Image Stitching with Structure-Preserving," 2017 International Conference on Virtual Reality and Visualization (ICVRV), Oct. 2017. DOI: 10.1109/ICVRV.2017.00050.
With the rapid development of computer graphics and networking, virtual reality technology has begun to penetrate social networks. Because a single virtual reality device has limited capabilities, combining different devices is a feasible way to immerse people more fully. In this paper, we combine the Kinect v1 with a Leap Motion sensor for whole-body gesture capture and overcome two difficulties: avatars' skeleton connection and movement data synchronization. Experiments show that our method performs well, and it could be a meaningful contribution to future multi-player interactive virtual social platforms.
Q. Qi, Sanyuan Zhao, Shuai Wang, Linjing Lai, Zhengchao Lei, Hongmei Song, "Avatars' Skeleton Connection and Movement Data Network Synchronization," 2017 International Conference on Virtual Reality and Visualization (ICVRV), Oct. 2017. DOI: 10.1109/ICVRV.2017.00096.
Pub Date: 2017-10-01 | DOI: 10.1109/ICVRV.2017.00092
Mingjing Ai, Baohe Chen, Qunfang Yang
In this paper, we propose an artificial viscosity relaxation (AVR) model based on the SPH method to simulate fluid viscosity. The model modifies the velocities of adjacent particle pairs by introducing a velocity relaxation amount, thereby updating particle velocities and simulating the motion of the fluid. We also apply the improved method to simulate the complete process of solid melting. As the experimental results show, the proposed method greatly simplifies the computation and reduces its cost, achieving higher frame rates for the same number of particles.
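The pairwise velocity-relaxation idea can be sketched as follows: each adjacent particle pair exchanges a fraction of its relative velocity, pulling both particles toward their mean. The relaxation coefficient `alpha` is an assumed parameter, and this is a minimal illustration of the update rule, not the paper's full AVR-SPH solver:

```python
import numpy as np

def relax_pair_velocities(v_i, v_j, alpha=0.1):
    """One AVR update: exchange a fraction `alpha` of the relative
    velocity between an adjacent particle pair. The symmetric update
    conserves momentum for equal particle masses."""
    dv = v_j - v_i
    return v_i + alpha * dv, v_j - alpha * dv

def avr_step(velocities, neighbor_pairs, alpha=0.1):
    """Apply the pairwise relaxation over all adjacent pairs in turn."""
    v = [np.asarray(x, dtype=float) for x in velocities]
    for i, j in neighbor_pairs:
        v[i], v[j] = relax_pair_velocities(v[i], v[j], alpha)
    return v
```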
Mingjing Ai, Baohe Chen, Qunfang Yang, "Real-Time Viscoelastic Fluid Simulation and Solid Melting Process Based on AVR-SPH," 2017 International Conference on Virtual Reality and Visualization (ICVRV), Oct. 2017. DOI: 10.1109/ICVRV.2017.00092.
Pub Date: 2017-10-01 | DOI: 10.1109/icvrv.2017.00028
Tianhao Gao, Wencheng Wang, B. Zhu
High-quality mesh segmentation depends on high-quality cuts. Unfortunately, the cuts produced by existing methods are not very satisfactory: their global measures tend to ignore the effects of local features, while their local measures amplify the influence of facet details through error accumulation. We observe that the cuts humans prefer depend much more on the overall characteristics of local regions (a kind of intermediate-level feature), especially in concave regions. We therefore present a construct that enhances the representation of these overall characteristics in concave regions to improve cut initialization there, and we design novel energy functions, based mainly on intermediate-level features, to extend cutting lines until they close. Based on the resulting closed cutting lines, we perform meaningful mesh segmentation in a bottom-up manner according to application requirements. Experimental results on a benchmark show that, compared with state-of-the-art methods, our cuts agree better with human preference.
Tianhao Gao, Wencheng Wang, B. Zhu, "Improved Mesh Segmentation with Perception-Aware Cuts," 2017 International Conference on Virtual Reality and Visualization (ICVRV), Oct. 2017. DOI: 10.1109/icvrv.2017.00028.
Pub Date: 2017-10-01 | DOI: 10.1109/ICVRV.2017.00067
Honglei Han, Aidong Lu, U. Wells
A method is proposed to measure what, and how deeply, users perceive in immersive virtual reality environments. A preliminary user study was carried out to verify that gaze behavior in immersive virtual reality differs in specific ways from that in traditional non-immersive virtual reality based on 2D monitors and interactive hardware. Analysis of the study results shows that in immersive environments users tend to move their heads so that an object of interest lies at the center of the view, whereas in non-immersive environments users tend to move only their eyes and move the avatar's head only when necessary. Based on this finding, a quantitative equation is proposed to measure user attention in immersive virtual reality environments. It can be used in a quality evaluation system to help designers find design issues in a scene that reduce the effectiveness of the narrative.
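The abstract does not reproduce the paper's quantitative equation; one plausible form consistent with the head-centering observation is an attention score that falls off with the angular offset of an object from the view direction. The Gaussian falloff and the `sigma` parameter below are assumptions, not the authors' formula:

```python
import math

def attention_score(view_dir, obj_dir, sigma=20.0):
    """Attention modeled as a Gaussian falloff with the angular offset
    (in degrees) between the head's view direction and the direction
    to the object. The Gaussian form and `sigma` are assumptions."""
    dot = sum(a * b for a, b in zip(view_dir, obj_dir))
    norm = math.sqrt(sum(a * a for a in view_dir)) * \
           math.sqrt(sum(b * b for b in obj_dir))
    angle = math.degrees(math.acos(max(-1.0, min(1.0, dot / norm))))
    return math.exp(-angle * angle / (2.0 * sigma * sigma))
```

An object dead ahead scores 1.0, and the score decays smoothly as the object drifts from the view center, matching the observation that immersive users recenter interesting objects by moving their heads.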
Honglei Han, Aidong Lu, U. Wells, "Under the Movement of Head: Evaluating Visual Attention in Immersive Virtual Reality Environment," 2017 International Conference on Virtual Reality and Visualization (ICVRV), Oct. 2017. DOI: 10.1109/ICVRV.2017.00067.
Pub Date: 2017-10-01 | DOI: 10.1109/ICVRV.2017.00076
Jing-jing Lian, Xiao Yang
A RANS simulation of the flow past the KCS container ship with prescribed direct motion is performed. The commercial CFD solver FLUENT is employed to solve the RANS equations, the RNG turbulence model is adopted, and the SIMPLE algorithm is used to couple velocity and pressure in the governing equations. Ship direct motions at different Froude numbers are simulated to obtain the flow field around the ship and the ship resistance. Results are computed both with and without the free surface, and they are validated by comparison with the experimental results of MOERI.
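The Froude numbers that parameterize such resistance tests relate ship speed to waterline length as Fr = U / sqrt(g * L). A minimal helper illustrates the relation; the KCS design values in the comment are commonly cited reference figures, not results from this paper:

```python
import math

def froude_number(speed_mps, length_m, g=9.81):
    """Length-based Froude number Fr = U / sqrt(g * L)."""
    return speed_mps / math.sqrt(g * length_m)

# Commonly cited KCS design condition (not from this paper):
# Lpp = 230 m at 24 knots (about 12.35 m/s) gives Fr of roughly 0.26.
```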
Jing-jing Lian, Xiao Yang, "Research on Hydrodynamic Forces of KCS Container Ship Based on Numerical Analysis," 2017 International Conference on Virtual Reality and Visualization (ICVRV), Oct. 2017. DOI: 10.1109/ICVRV.2017.00076.
Pub Date: 2017-10-01 | DOI: 10.1109/ICVRV.2017.00110
Zhu Teng, He Hanwu, Wu Yueming, Chen He-en, Chen Yongbin
We propose a markerless MR guidance system for factory assembly that uses an augmented reality device and a camera sensor to display virtual models at designated positions in the real world. Using image processing methods, the system automatically detects the locations of the device and the target. Application results in a real factory scene show that the guidance system performs well: it tracks changes in the posture of the target product in less than 200 ms and then adjusts the virtual models to the correct position accordingly.
Zhu Teng, He Hanwu, Wu Yueming, Chen He-en, Chen Yongbin, "Mixed Reality Application: A Framework of Markerless Assembly Guidance System with Hololens Glass," 2017 International Conference on Virtual Reality and Visualization (ICVRV), Oct. 2017. DOI: 10.1109/ICVRV.2017.00110.
Pub Date: 2017-10-01 | DOI: 10.1109/ICVRV.2017.00064
Yang Wenzhen, Dong Lujie, Yan Ming, Wu Xinli, Jiang Zhaona, Pan Zhigeng
Manipulating virtual objects with our real hands is a great challenge for the virtual reality community. We present a master-slave hand system for naturally manipulating virtual objects with a user's hand. The system obtains the position, orientation, and finger-joint angles of the user's hand, which drive a dexterous virtual hand to interact with virtual environments. The dexterous virtual hand we modeled has motion functions analogous to those of the real hand, and the simplified manipulation intentions we defined help the virtual hand manipulate virtual objects conveniently. A virtual assembly system prototype validates that the master-slave hand system achieves intuitive and flexible hands-on interaction with virtual environments.
Yang Wenzhen, Dong Lujie, Yan Ming, Wu Xinli, Jiang Zhaona, Pan Zhigeng, "A Master-Slave Hand System for Virtual Reality Interaction," 2017 International Conference on Virtual Reality and Visualization (ICVRV), Oct. 2017. DOI: 10.1109/ICVRV.2017.00064.