Chen Shanmin, Ning Tao, Wang Ke. "Motion Control of Virtual Human Based on Optical Motion Capture in Immersive Virtual Maintenance System." 2011 International Conference on Virtual Reality and Visualization. doi:10.1109/ICVRV.2011.24

Immersive virtual maintenance technology can uncover product problems during the design process, safeguarding quality and reducing life-cycle cost. As an indispensable part of immersive virtual maintenance, motion control of the virtual human is critical to simulation efficiency. Existing approaches, however, still rely on motion editing or image-based methods, which makes simulating a maintenance process time-consuming. We therefore propose a real-time motion control algorithm based on optical motion capture, making virtual maintenance both immersive and efficient. To keep the algorithm fast, an editable human model is constructed on a simplified human skeleton. To let the virtual human operate over an unlimited range while the simulation worker moves within a limited space, a walking gesture is defined and a prototype action database for the virtual human is built. To obtain continuous visual effects, the virtual human's view direction is smoothed using a gyroscope. Finally, experiments validate the algorithm and its effectiveness.
Wang Aiping, Chen Zhiquan, Li Sikun. "Multi-cue Based Discriminative Visual Object Contour Tracking." 2011 International Conference on Virtual Reality and Visualization. doi:10.1109/ICVRV.2011.52

This paper proposes a discriminative visual object contour tracking algorithm using a multi-cue fusion particle filter. A novel contour evolution energy is designed by integrating an incremental-learning discriminative model into the parametric snake model, and this energy function is combined with a mixed cascade particle filter that fuses multiple observation models for accurate contour tracking. In the proposed method, the incremental-learning discriminative model provides an observation model of the object's appearance, while the bending energy, computed by the thin plate spline (TPS) model with multi-order graph matching between contours in consecutive frames, together with the energy obtained from the contour evolution process, serve as observation models of contour deformation. To handle these multiple observation models, a mixed cascade importance sampling process fuses the observations efficiently. In addition, the dynamic model used in the tracker is improved with optical flow. Experiments on real videos show that our approach substantially improves contour tracking performance.
Jian Yang, Jingfeng Guo. "Image Texture Feature Extraction Method Based on Regional Average Binary Gray Level Difference Co-occurrence Matrix." 2011 International Conference on Virtual Reality and Visualization. doi:10.1109/ICVRV.2011.20

Texture features measure the relationships among pixels in a local area, reflecting spatial variations in image gray levels. This paper presents a texture feature extraction method based on a regional average binary gray level difference co-occurrence matrix, combining structural texture analysis with statistical methods. First, we compute the average binary gray level difference over the eight neighbors of each pixel to obtain a difference image that expresses the variation pattern of regional gray levels. Second, the regional co-occurrence matrix is constructed from these average binary gray level differences. Finally, second-order statistical parameters reflecting the image texture are extracted from the regional co-occurrence matrix. Theoretical analysis and experimental results show the accuracy and validity of the method.
Changgong Zhang, P. Xi, C. Zhang. "CUDA-Based Volume Ray-Casting Using Cubic B-spline." 2011 International Conference on Virtual Reality and Visualization. doi:10.1109/ICVRV.2011.10

GPU-based volume ray-casting provides high performance for interactive medical visualization. The more samples we take along each ray, i.e., the higher the sampling rate, the more accurately we can represent the volume data, especially when the combined frequency of the volume and the transfer function is high. However, raising the sampling rate reduces rendering performance considerably, because more samples mean more time-consuming memory accesses on the GPU. In this paper, we propose an effective volume ray-casting algorithm that reconstructs additional samples within a ray segment using a cubic B-spline. This improves the effective sampling rate and yields high-quality images without obvious performance degradation. Moreover, the algorithm requires no other adjustments, which keeps it flexible and simple. We implement the ray-caster with the CUDA programming interface rather than a conventional fragment shader. Experimental results show that the method serves as an effective medical visualization tool.
Enya Shen, Huaxun Xu, Wenke Wang, Xun Cai, L. Zeng, Sikun Li. "Interactive Visual Analysis of Vortex in 3D Flow with FFDL." 2011 International Conference on Virtual Reality and Visualization. doi:10.1109/ICVRV.2011.30

Feature visualization plays an important role in visualizing complicated flows because it highlights flow features with a simplified representation. Traditional feature visualization methods may extract important features of the flow field imprecisely because they cannot draw on the user's knowledge and experience. This paper presents a particle-based visualization system built on interactive fuzzy feature extraction and interactive visual analysis. To obtain more precise feature extraction, we propose an interactive fuzzy feature description language (FFDL) and an interactive fuzzy feature extraction algorithm. Building on our previous work, we introduce a proportion ratio for the different rules and further optimize the algorithm in practice, guided by discussions with domain researchers and extensive experiments. The experiments show that our method not only makes full use of the user's ability to extract features precisely, but also reflects the uncertainty of the numerical simulation data.
Liu Pengyuan, Ma Long, Li Ruihua. "Research on Interaction Technologies in Desktop Virtual Maintenance System of Certain Weapon." 2011 International Conference on Virtual Reality and Visualization. doi:10.1109/ICVRV.2011.27

In the desktop virtual maintenance system of a certain weapon, the interaction process is divided into three phases, pickup, drag, and release, using common interaction devices such as a 2-D mouse and keyboard. Because the mouse provides only 2-D screen coordinates while the virtual entities have 3-D world coordinates, a mapping must be established between the screen coordinate system and the 3-D world coordinate system. Based on this mapping, the key techniques of each phase are presented, including an acupuncture pickup method, a rough-area judgment method, and drag and release control methods. These pickup, drag, and release techniques for 3-D virtual entities are applied to the weapon's desktop virtual maintenance system, where both real-time performance and the sense of reality are shown to improve effectively.
Xiao Bin, Sun Chunsheng. "Modeling and Simulation on Radar Detection Range under Complex Electromagnetic Environment." 2011 International Conference on Virtual Reality and Visualization. doi:10.1109/ICVRV.2011.55

A series of models for radar detection range under a complex electromagnetic environment is established, covering antenna gain, multi-path propagation, attenuation, rainfall and sea-surface clutter, and active electronic jamming. A radar range simulation with visualization is implemented, providing direct imagery for tactical decision-making.