Under the Movement of Head: Evaluating Visual Attention in Immersive Virtual Reality Environment
Honglei Han, Aidong Lu, U. Wells
Pub Date: 2017-10-01 | DOI: 10.1109/ICVRV.2017.00067
A method is proposed to measure what, and how deeply, a user can perceive in immersive virtual reality environments. A preliminary user study was carried out to verify that user gaze behavior differs in specific ways in immersive virtual reality environments compared with traditional non-immersive environments based on 2D monitors and interactive hardware. Analysis of the study results shows that in immersive virtual reality environments users are more likely to move their heads so that the object of interest is located at the center of the view, while in non-immersive environments users tend to move their eyes and move the avatar's head only when necessary. Based on this finding, a quantitative equation is proposed to measure the user's attention in immersive virtual reality environments. It can be used in a quality evaluation system to help designers find design issues in a scene that reduce the effectiveness of the narrative.
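The abstract does not reproduce the paper's quantitative attention equation. As a purely illustrative sketch of the head-centering idea, assuming attention falls off with the angle between the head's forward direction and the direction to an object (the function name, the linear falloff, and the field-of-view value are all assumptions, not the paper's formulation):

```python
import math

def attention_score(head_forward, head_pos, obj_pos, fov_deg=110.0):
    """Score in [0, 1]: 1 when the object lies on the view axis, falling
    linearly to 0 at the edge of the field of view.
    Illustrative stand-in for the paper's equation, not the real one."""
    to_obj = [o - h for o, h in zip(obj_pos, head_pos)]
    norm = math.sqrt(sum(c * c for c in to_obj))
    if norm == 0.0:
        return 1.0  # object coincides with the head position
    # head_forward is assumed to be a unit vector
    dot = sum(f * t for f, t in zip(head_forward, to_obj)) / norm
    dot = max(-1.0, min(1.0, dot))  # guard against rounding error
    angle = math.degrees(math.acos(dot))
    return max(0.0, 1.0 - angle / (fov_deg / 2.0))
```

An object straight ahead scores 1.0; one at 90 degrees, well outside the assumed half-FOV of 55 degrees, scores 0.0.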
TrCollage: Efficient Image Collage Using Tree-Based Layer Reordering
Shiguang Liu, Xiaobing Wang, Ping Li, Jun-yong Noh
Pub Date: 2017-10-01 | DOI: 10.1109/ICVRV.2017.00120
This paper proposes an efficient image collage approach called TrCollage that uses tree-based layer reordering, taking into account not only the efficiency of collage processing but also the quality of the final collage. In addition, the tree-based TrCollage meets the growing demands of rapidly developing mobile technology, which calls for robust, efficient picture collage without computation-intensive processing such as graph cut or saliency detection. Experimental results show the efficiency and effectiveness of TrCollage, which produces high-quality collages through layer reordering.
Improved Mesh Segmentation with Perception-Aware Cuts
Tianhao Gao, Wencheng Wang, B. Zhu
Pub Date: 2017-10-01 | DOI: 10.1109/icvrv.2017.00028
High-quality mesh segmentation depends on high-quality cuts. Unfortunately, the cuts produced by existing methods are not very satisfactory: global measurements tend to ignore the effects of local features, while local measurements magnify the influence of facet details through error accumulation. We observe that the cuts preferred by humans depend much more on the overall characteristics of local regions, a kind of intermediate-level feature, especially in concave regions. We therefore present a construct that enhances the representation of the overall characteristics of concave regions to improve cut initialization there, and we design novel energy functions, based mainly on intermediate-level features, to extend cutting lines until they are closed. Based on the resulting closed cutting lines, we perform meaningful mesh segmentation in a bottom-up manner according to application requirements. Compared with state-of-the-art methods, our method produces cuts that better match human preference, as shown by experimental results on a benchmark.
A Recognition Method of Misjudgment Gesture Based on Convolutional Neural Network
Kaiyun Sun, Zhiquan Feng, Changsheng Ai, Yingjun Li, Jun Wei, Xiaohui Yang, Xiaopei Guo
Pub Date: 2017-10-01 | DOI: 10.1109/ICVRV.2017.00062
Using the Kinect 2.0, a library of 17 static gestures was established and a convolutional neural network was trained on it. Extensive statistical experiments were conducted on the classification of each gesture. During the experiments, we found that several of the 17 gestures were easily confused; for ease of description, we call these similarity gestures. We assume, from a large-data perspective, that the test results of the convolutional neural network model satisfy the law of large numbers. On that basis, this paper presents a recognition method for misjudged gestures based on probability statistics.
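The abstract does not spell out the probability-statistics procedure. One plausible reading, assuming the CNN emits a class-probability vector per observation, is to average those vectors over repeated trials so that, by the law of large numbers, the mean probabilities separate similarity gestures. This is a hypothetical sketch, not the paper's actual method:

```python
def classify_by_statistics(prob_vectors):
    """Average per-observation CNN probability vectors and return the
    index of the class with the highest mean probability.
    prob_vectors: list of equal-length probability lists (assumed input)."""
    n = len(prob_vectors)
    k = len(prob_vectors[0])
    mean = [sum(v[i] for v in prob_vectors) / n for i in range(k)]
    return max(range(k), key=lambda i: mean[i])
```

With many samples, a gesture that wins only 60% of single frames against a similar gesture still wins the averaged decision almost surely.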
Avatars' Skeleton Connection and Movement Data Network Synchronization
Q. Qi, Sanyuan Zhao, Shuai Wang, Linjing Lai, Zhengchao Lei, Hongmei Song
Pub Date: 2017-10-01 | DOI: 10.1109/ICVRV.2017.00096
With the rapid development of computer graphics and networks, virtual reality technology has begun to penetrate social networking. Because a single virtual reality device has limited functionality, combining different devices is a feasible approach to immersion. In this paper, we combine the Kinect v1 with a Leap Motion sensor for whole-body gesture capture and overcome two difficulties: avatar skeleton connection and movement data synchronization. Experiments show that our method performs well. It could be a meaningful contribution to future multi-player interactive virtual social platforms.
Real-Time Viscoelastic Fluid Simulation and Solid Melting Process Based on AVR-SPH
Mingjing Ai, Baohe Chen, Qunfang Yang
Pub Date: 2017-10-01 | DOI: 10.1109/ICVRV.2017.00092
In this paper, we propose an artificial viscosity relaxation (AVR) model based on the SPH method to simulate fluid viscosity. The model modifies the velocities of adjacent particle pairs by introducing a velocity relaxation amount, thereby updating velocities and simulating fluid motion. We also apply the improved method to simulate the complete process of solid melting. Experimental results show that the proposed method greatly simplifies the calculation, reduces the computational cost, and achieves a higher frame rate for the same number of particles.
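The AVR equations themselves are not given in the abstract. A minimal sketch of pairwise velocity relaxation, assuming each adjacent particle pair moves its velocities toward their mean by a relaxation fraction alpha (the parameter name and value are assumptions; the real model presumably also weights by an SPH kernel and particle masses):

```python
def relax_pair_velocities(v_i, v_j, alpha=0.1):
    """Move the velocities of an adjacent particle pair toward their
    mean by fraction alpha; the pair's total momentum is unchanged.
    Hypothetical parameterization, not the paper's AVR equations."""
    dv = [(b - a) for a, b in zip(v_i, v_j)]        # relative velocity
    new_i = [a + alpha * d for a, d in zip(v_i, dv)]
    new_j = [b - alpha * d for b, d in zip(v_j, dv)]
    return new_i, new_j
```

Relaxing toward the pair mean conserves the pair's momentum, which is why such a step can mimic viscous damping without evaluating explicit viscosity forces.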
Wide Baseline Image Stitching with Structure-Preserving
Mingjun Cao, Wei Lyu, Zhong Zhou, Wei Wu
Pub Date: 2017-10-01 | DOI: 10.1109/ICVRV.2017.00050
This paper presents a novel stitching approach for wide-baseline images with low texture. First, a three-phase feature matching model is applied to extract rich and reliable feature matches; in low-texture cases, line matching and contour matching compensate for the poor quality of point matching. Then a structure-preserving warp is performed: several constraints are defined and an objective function is minimized to solve for the optimal mesh, from which multiple affine matrices are obtained to warp the images. Furthermore, alignment error, color difference, and saliency difference are jointly considered to find the optimal seam for image blending. Experiments on both common datasets and challenging surveillance scenes illustrate the effectiveness of the proposed method, which performs strongly compared with other state-of-the-art methods.
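The seam-finding step described above can be caricatured as a seam-carving-style dynamic program over a weighted sum of the three per-pixel costs. The weights and the vertical-seam formulation are assumptions for illustration, not the paper's exact optimization:

```python
import numpy as np

def optimal_seam(align_err, color_diff, saliency_diff, w=(1.0, 1.0, 1.0)):
    """Find a vertical seam (one column index per row) minimizing a
    weighted sum of the three per-pixel cost maps via dynamic programming.
    Illustrative assumption: 8-connected seam, hand-picked weights."""
    cost = w[0] * align_err + w[1] * color_diff + w[2] * saliency_diff
    h, wd = cost.shape
    dp = cost.copy()
    for y in range(1, h):
        left = np.roll(dp[y - 1], 1); left[0] = np.inf     # no left neighbor
        right = np.roll(dp[y - 1], -1); right[-1] = np.inf  # no right neighbor
        dp[y] += np.minimum(np.minimum(left, dp[y - 1]), right)
    # backtrack from the cheapest bottom pixel
    seam = [int(np.argmin(dp[-1]))]
    for y in range(h - 2, -1, -1):
        x = seam[-1]
        lo, hi = max(0, x - 1), min(wd, x + 2)
        seam.append(lo + int(np.argmin(dp[y, lo:hi])))
    return seam[::-1]
```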
Research on Thangka Image Scene Switching Based on VR
Jianbang Jia, Chuan-qian Tang, Shou-Liang Tang, Huan Wu, Xiaojing Liu, Zhiqiang Liu
Pub Date: 2017-10-01 | DOI: 10.1109/icvrv.2017.00103
With the development of computer simulation technology and computer graphics, virtual reality (VR) has become a research hotspot and challenge worldwide. Starting from practical needs, this paper presents research on Thangka image browsing based on VR. Second-order gradient enhancement with the Sobel operator, a maximum entropy segmentation algorithm, a maximum gray-value segmentation algorithm, and a point-to-line symmetry method are used to realize VR-based Thangka image scene switching. Experimental results show that the processing time with the Leap Motion is 20-30 ms per frame and the accuracy of rigid-body region detection is above 70%, which basically meets the requirements of real-time, accurate switching of Thangka image scenes.
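As one concrete piece of the pipeline, a plain Sobel gradient-magnitude pass might look like the following; how the paper's "second-order gradient enhancement" builds on it is not specified in the abstract, so this shows only the basic first-order operator:

```python
import numpy as np

def sobel_magnitude(img):
    """Gradient magnitude via 3x3 Sobel kernels over the valid interior
    (output is 2 pixels smaller in each dimension). Plain-loop sketch."""
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
    ky = kx.T  # vertical-gradient kernel
    h, w = img.shape
    gx = np.zeros((h - 2, w - 2))
    gy = np.zeros((h - 2, w - 2))
    for y in range(h - 2):
        for x in range(w - 2):
            patch = img[y:y + 3, x:x + 3]
            gx[y, x] = (patch * kx).sum()
            gy[y, x] = (patch * ky).sum()
    return np.hypot(gx, gy)
```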
Mixed Reality Application: A Framework of Markerless Assembly Guidance System with HoloLens Glass
Zhu Teng, He Hanwu, Wu Yueming, Chen He-en, Chen Yongbin
Pub Date: 2017-10-01 | DOI: 10.1109/ICVRV.2017.00110
We propose a markerless MR guidance system for factory assembly that uses an augmented reality device and a camera sensor to display virtual models at designated positions in the real world. By taking advantage of image processing methods, the system can automatically detect the locations of the device and the target. Application results in a real factory scene show that the guidance system performs well: it tracks changes in the posture of the target product in under 200 ms and then adjusts the virtual models to the correct positions accordingly.
Research on Flexible Mapping Among Multiple Gestures and One Semantic in Intelligent Teaching Interface
Yu Qiao, Zhiquan Feng, Changsheng Ai, Yingjun Li, Jun Wei, Xiaohui Yang, Tao Xu, Xiaoyan Zhou
Pub Date: 2017-10-01 | DOI: 10.1109/ICVRV.2017.00093
In recent years, gesture-based interaction has become a research hotspot. In this paper, we focus on the design and implementation of a gesture-based intelligent teaching interface that interacts with the teacher by obtaining the teacher's hand gesture information through the Kinect. Two problems arise in the interaction between the teacher and the intelligent teaching interface: the interface may respond improperly to an interactive command, or it may not respond at all. To address these, we propose flexible mapping among multiple gestures and one semantic model (FMGS) in the same context. Experiments show that FMGS solves both problems and effectively reduces the user's cognitive load.
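The many-gestures-to-one-semantic idea can be pictured as a context-indexed lookup table in which several recognized gestures resolve to the same command. All gesture and command names below are hypothetical; the paper's actual vocabulary is not given in the abstract:

```python
# Hypothetical context -> gesture -> semantic tables.
FMGS_TABLE = {
    "lecture": {
        "swipe_left": "next_slide",
        "point_right": "next_slide",   # several gestures, one semantic
        "flick": "next_slide",
        "palm_up": "previous_slide",
    },
}

def interpret(context, gesture):
    """Map a recognized gesture to its semantic command in the given
    context; returns None for unmapped gestures instead of misfiring."""
    return FMGS_TABLE.get(context, {}).get(gesture)
```

Tolerating several gestures per semantic is one way to reduce the no-response and wrong-response failures the abstract describes, since near-miss recognitions still land on the intended command.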