"GPU-Based Computation of the Integral Image" by Wei Huang, Ling-Da Wu, Yougen Zhang. DOI: 10.1109/ICVRV.2011.43

The integral image can be used to quickly perform common pixel-level operations over rectangular regions of a grey-level image, so it has been widely used in computer vision and pattern recognition. In this paper, we first present an intuitive parallel method for computing the integral image. Building on it, a two-stage method based on a binary tree is then introduced; each stage of the algorithm performs a top-down traversal followed by a bottom-up traversal over the tree. Finally, we analyze the case of large-scale grey-level images and optimize the computation for the CUDA architecture. Experiments on consumer-level PC hardware show that the GPU-based algorithm outperforms the corresponding CPU-based algorithm in speed on large-scale images.
"A Smart Compression Scheme for GPU-Accelerated Volume Rendering of Time-Varying Data" by Yi Cao, Guoqing Wu, Huawei Wang. DOI: 10.1109/ICVRV.2011.56

Visualization of large-scale time-varying data can give scientists a deeper understanding of the physical phenomena behind the massive data. However, because of non-uniform data-access speeds and memory-capacity bottlenecks, interactive rendering of large-scale time-varying data remains a major challenge. Data compression can alleviate both bottlenecks, but simply inserting a compression stage into the visualization pipeline does not solve the interactivity problem effectively, because much redundant data still remains in the volume data. In this paper, a smart compression scheme based on information theory is presented to accelerate large-scale time-varying volume rendering. An entropy formula is proposed that automatically computes the importance of the data, helping scientists analyze and extract features from massive datasets. Lossy compression and data transfer then operate directly on this feature data, the remaining non-critical data is discarded in the process, and a GPU ray-casting volume renderer is used for fast rendering. Experimental results show that the scheme reduces the amount of data as much as possible while preserving the characteristics of the data, and therefore greatly improves rendering speed even for large-scale time-varying data.
"High Quality Range Data Acquisition Using Time-of-Flight Camera and Stereo Vision" by Zhang Mingming, Zhou Yu, Xiang Xueqin, Pan Zhigeng. DOI: 10.1109/ICVRV.2011.19

A time-of-flight range camera (TOF camera) has many advantages: it is compact, easy to use, and can acquire three-dimensional depth data of any scene in real time, which has made it increasingly widely used. However, a TOF camera produces very noisy depth maps and often performs poorly on richly textured scenes, such as textiles, precisely where stereo vision excels. To solve this problem and exploit the complementary merits of the two methods, we propose jointly using a TOF camera and stereo vision to produce a high-quality depth map for richly textured surfaces. We choose textiles as the experimental object, and the results show that our method significantly improves the quality of the textile depth data captured by the TOF camera, thereby also expanding the application scope of TOF cameras.
"Key Technologies of the Virtual Driving Platform Based on EON" by Jiajun He, Wen-jun Hou. DOI: 10.1109/ICVRV.2011.25

The virtual driving platform comprises hardware and software parts. The hardware part consists of the control devices, such as the steering wheel, throttle, and brake pedals; the software part is mainly the virtual traffic environment with which the driver interacts. Two main factors determine the fidelity of the virtual driving environment: the realism of the three-dimensional static traffic scene, and the driving behavior of the intelligent autonomous vehicles in the platform. This paper analyzes the key technologies of the virtual traffic environment and methods for creating intelligent autonomous vehicles in this virtual platform.
"An Adaptive Method for Shader Simplification" by Xijun Song, Changhe Tu, Yanning Xu. DOI: 10.1109/ICVRV.2011.12

Programmable shaders are a powerful tool for describing objects' appearances in computer graphics. However, executing shaders takes up much of the rendering time and can easily exceed the hardware's capability. We present a novel method that simplifies both programmable shaders written in the RenderMan Shading Language and geometry, reducing rendering time with little quality loss. Given a detail level, we use progressive meshes to simplify geometry adaptively to an appropriate representation; in addition, shaders are automatically simplified by applying our simplification rules. To our knowledge, prior research on geometric level of detail has usually combined it with texture level of detail, whereas our approach is the first to combine geometric level of detail with shader level of detail.
"A Cooperative Simulation System for AUV Based on Multi-agent" by Zhuo Wang, Xiaoning Feng. DOI: 10.1109/ICVRV.2011.48

Advances in distributed-system technology have created new possibilities for innovation in simulation and for new tools and facilities that could improve simulation productivity. This paper describes a multi-agent-based collaborative simulation system for autonomous undersea vehicles. Multiple agents and their collaborative module are used to resolve the problems of existing simulation systems. The details of each agent in the system are described, and the collaboration between agents and the decision rules are also introduced. The paper then presents results of autonomous underwater vehicle (AUV) simulation tests on the system.
"Modeling of Smoke from a Single View" by Zhengyan Liu, Yong Hu, Yue Qi. DOI: 10.1109/ICVRV.2011.8

This paper presents a simple method for modeling smoke from a single view that preserves a realistic look when observed from other views. Thin translucent smoke, such as that generated by a cigarette, candle, or joss stick, is the main focus of this paper. The proposed method first computes the smoke intensity from the input key-frame image, then partitions the smoke into multiple segments. For each segment, the principal direction is calculated by principal component analysis, and two basis functions are generated. The depth of each pixel in the image is estimated with these basis functions, and a three-dimensional density distribution is then constructed from the intensity and depth. Finally, the smoke density distributions at key frames are used to generate animated smoke. Experimental results indicate that our method can synthesize visually realistic smoke from a single view at low computational cost.
"Information Assisted Visualization of Large Scale Time Varying Scientific Data" by Wu Guoqing, Cao Yi, Yin Junping, Wang Huawei, Song Lei. DOI: 10.1109/ICVRV.2011.39

Visualization of large-scale time-varying scientific data has been a challenging problem due to the data's ever-increasing size. Identifying and presenting the most informative (or important) aspects of the data plays an important role in efficient visualization. In this paper, an information-assisted method is presented to locate temporal and spatial data containing salient physical features and thereby accelerate the visualization process. To locate temporal data, two information-theoretic measures are used: the KL-distance, which measures the information dissimilarity between different time steps, and the off-line marginal utility, which measures the surprising information provided by each time step. To locate spatial data, a character factor is introduced that measures the feature abundance of each sub-region. Based on these measures, the method adaptively selects the important time steps and the sub-regions with the maximum information content, so that time-varying data can be visualized effectively in limited time or with limited resources without losing potentially useful physical features. Experiments on radiation-diffusion dynamics and plasma-physics simulation data demonstrate the effectiveness of the proposed method, which can remarkably improve the way scientists analyze and understand large-scale time-varying scientific data.
"Video Semantic Concept Detection Based on Conceptual Correlation and Boosting" by Dan-Wen Chen, Liqiong Deng, Lingda Wu. DOI: 10.1109/ICVRV.2011.42

Semantic concept detection is a key technique for video semantic indexing, yet traditional approaches do not take conceptual correlation into account adequately. A new approach based on conceptual correlation and boosting is proposed in this paper, consisting of three steps: first, context-based conceptual fusion models are built using correlative-concept selection; then a boosting process based on inter-concept correlation is run; finally, the multiple models generated during boosting are fused. Experimental results on the TRECVID 2005 dataset show that the proposed method achieves a notable and consistent improvement.
"Foot Trajectory Kept Motion Retargeting" by Xiaomeng Feng, Shi Qu, Lingda Wu. DOI: 10.1109/ICVRV.2011.34

This paper presents a novel method for retargeting motions. We treat the whole leg as a skeleton of changeable length; by keeping the length proportion and the direction of the leg vector before and after retargeting, the motion can be retargeted by scaling the root node. Because the constraint on foot position is transformed into constraints on the leg vector's length and direction, and adjusting the leg vector is easy, our method does not require a complex optimization algorithm. Experimental results show that the method runs in real time and that the characteristics of the foot trajectory are preserved after retargeting.