2011 International Conference on Virtual Reality and Visualization: Latest Publications
GPU-Based Computation of the Integral Image
Pub Date : 2011-11-04 DOI: 10.1109/ICVRV.2011.43
Wei Huang, Ling-Da Wu, Yougen Zhang
The integral image can be used to quickly complete common pixel-level operations over rectangular regions of a grey-level image, so it has been widely used in computer vision and pattern recognition. In this paper, we first present an intuitive parallel method to compute the integral image. Building on it, we then introduce a two-stage method based on a binary tree: in each stage, the algorithm performs a top-down followed by a bottom-up traversal of the tree. Finally, we analyze the case of large-scale grey-level images and optimize the computation for the CUDA architecture. Experiments on consumer-level PC hardware show that the GPU-based algorithm outperforms the corresponding CPU-based algorithm in speed on large-scale images.
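The computation described above can be sketched with two prefix-sum passes, one over rows and one over columns; on a GPU each pass maps to a parallel scan. This is an illustrative CPU sketch, not the authors' CUDA implementation:

```python
import numpy as np

def integral_image(img):
    """Summed-area table: ii[y, x] = sum of img[0..y, 0..x].

    On a GPU each cumsum becomes a parallel prefix-sum (scan) pass.
    """
    ii = np.cumsum(img, axis=1, dtype=np.int64)  # scan each row
    ii = np.cumsum(ii, axis=0)                   # scan each column
    return ii

def region_sum(ii, y0, x0, y1, x1):
    """Sum over the inclusive rectangle [y0..y1] x [x0..x1] in O(1)."""
    total = ii[y1, x1]
    if y0 > 0: total -= ii[y0 - 1, x1]
    if x0 > 0: total -= ii[y1, x0 - 1]
    if y0 > 0 and x0 > 0: total += ii[y0 - 1, x0 - 1]
    return total

img = np.arange(16).reshape(4, 4)
ii = integral_image(img)
# region (1,1)-(2,2) covers pixels 5, 6, 9, 10 -> 30
```

Once the table is built, any rectangular sum costs four lookups regardless of region size, which is why the structure is so useful for pixel-level vision operations.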
Citations: 7
A Smart Compression Scheme for GPU-Accelerated Volume Rendering of Time-Varying Data
Pub Date : 2011-11-04 DOI: 10.1109/ICVRV.2011.56
Yi Cao, Guoqing Wu, Huawei Wang
The visualization of large-scale time-varying data can give scientists a deeper understanding of the physical phenomena behind massive datasets. However, because of non-uniform data access speeds and memory capacity bottlenecks, interactive rendering of large-scale time-varying data remains a major challenge. Data compression can alleviate both bottlenecks, but simply applying a compression strategy to the visualization pipeline does not effectively solve the interaction problem, because much redundant data still remains in the volume data. In this paper, a smart compression scheme based on information theory is presented to accelerate large-scale time-varying volume rendering. An entropy formula is proposed that automatically measures data importance, helping scientists analyze and extract features from massive data. Lossy compression and data transfer then operate directly on these feature data, the remaining non-critical data are discarded in the process, and GPU ray-casting is used for fast volume rendering. Experimental results show that our smart compression scheme reduces the amount of data as much as possible while preserving its characteristics, and therefore greatly improves rendering speed even for large-scale time-varying data.
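The paper's exact entropy formula is not reproduced here; as a hypothetical stand-in, the Shannon entropy of a block's value histogram captures the same idea of ranking blocks by information content so that low-entropy (featureless) blocks can be compressed away first:

```python
import numpy as np

def block_entropy(block, bins=64):
    """Shannon entropy (bits) of a data block's value histogram.

    Hypothetical importance measure: blocks with spread-out histograms
    (high entropy) carry more feature information and are kept; blocks
    with concentrated histograms (low entropy) are discarded or
    compressed more aggressively.
    """
    hist, _ = np.histogram(block, bins=bins)
    p = hist / hist.sum()
    p = p[p > 0]                     # 0 * log(0) contributes nothing
    return float(-(p * np.log2(p)).sum())

flat = np.zeros(1024)                            # constant block
noisy = np.random.default_rng(0).random(1024)    # feature-rich block
```

A constant block scores zero bits, while a block whose values spread across all 64 bins approaches the 6-bit maximum, so sorting blocks by this score yields a simple importance ordering.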
Citations: 8
High Quality Range Data Acquisition Using Time-of-Flight Camera and Stereo Vision
Pub Date : 2011-11-04 DOI: 10.1109/ICVRV.2011.19
Zhang Mingming, Zhou Yu, Xiang Xueqin, Pan Zhigeng
Time-of-flight range cameras (TOF cameras) have many advantages: they are compact, easy to use, and can obtain three-dimensional depth data of any scene in real time, which makes them increasingly widely used in various applications. However, a TOF camera produces a very noisy depth map and often performs poorly on richly textured scenes such as textiles, precisely the situations in which stereo vision excels. To solve this problem and exploit the respective merits of the two methods, we propose a method that jointly uses a TOF camera and stereo vision to produce a high-quality depth map for richly textured surfaces. We choose textiles as the experimental object, and the results show that our method significantly improves the quality of the textile depth data captured by the TOF camera, thereby also expanding the range of applications of TOF cameras.
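The general idea of combining the two sensors can be sketched as a per-pixel confidence-weighted blend. This is a minimal illustration under assumed confidence maps, not the authors' actual fusion pipeline:

```python
import numpy as np

def fuse_depth(tof, stereo, tof_conf, stereo_conf):
    """Confidence-weighted fusion of a TOF depth map with a stereo one.

    Hypothetical sketch: per pixel, blend the two depth estimates by
    their confidence weights; where one sensor has zero confidence
    (e.g. TOF noise on textured cloth, stereo failure on texture-less
    regions), the other's estimate is used unchanged.
    """
    w = tof_conf + stereo_conf
    w = np.where(w == 0, 1.0, w)  # avoid division by zero on holes
    return (tof * tof_conf + stereo * stereo_conf) / w

tof        = np.array([[2.0, 2.0], [0.0, 2.0]])
stereo     = np.array([[4.0, 4.0], [4.0, 0.0]])
tof_conf   = np.array([[1.0, 1.0], [0.0, 1.0]])
stereo_conf = np.array([[1.0, 0.0], [1.0, 0.0]])
fused = fuse_depth(tof, stereo, tof_conf, stereo_conf)
```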
Citations: 2
Key Technologies of the Virtual Driving Platform Based on EON
Pub Date : 2011-11-04 DOI: 10.1109/ICVRV.2011.25
Jiajun He, Wen-jun Hou
The virtual driving platform consists of hardware and software. The hardware comprises control devices such as the steering wheel, throttle, and brake pedals; the software mainly provides the virtual traffic environment with which the driver interacts. Two main factors determine the fidelity of the virtual driving environment: the realism of the three-dimensional static traffic scenes, and the driving behavior of the intelligent autonomous vehicles in the platform. This paper analyzes and researches the key technologies of the virtual traffic environment and methods for creating intelligent autonomous vehicles on this virtual platform.
Citations: 1
An Adaptive Method for Shader Simplification
Pub Date : 2011-11-04 DOI: 10.1109/ICVRV.2011.12
Xijun Song, Changhe Tu, Yanning Xu
Programmable shaders are a powerful tool for describing objects' appearances in computer graphics. However, executing shaders takes up much of the rendering time and can easily exceed the hardware's capability. We present a novel method that simplifies both programmable shaders based on the RenderMan Shading Language and geometry, reducing rendering time with little quality loss. Given a level of detail, we use progressive meshes to simplify geometry adaptively to an appropriate representation; in addition, shaders are automatically simplified by applying our simplification rules. To our knowledge, research on geometric level of detail is usually combined with texture level of detail, whereas our approach is the first to combine geometric level of detail with shader level of detail.
Citations: 1
A Cooperative Simulation System for AUV Based on Multi-agent
Pub Date : 2011-11-04 DOI: 10.1109/ICVRV.2011.48
Zhuo Wang, Xiaoning Feng
Advances in distributed-system technology have created new possibilities for innovation in simulation and for new tools and facilities that can improve simulation productivity. This paper describes a multi-agent-based collaborative simulation system for autonomous undersea vehicles. Multiple agents and their collaborative module are used to resolve the problems of the existing simulation system. Each agent in the system is described in detail, and the collaboration among agents and the decision rules are also introduced. The paper then presents results of autonomous underwater vehicle (AUV) simulation tests on the system.
Citations: 5
Modeling of Smoke from a Single View
Pub Date : 2011-11-04 DOI: 10.1109/ICVRV.2011.8
Zhengyan Liu, Yong Hu, Yue Qi
This paper presents a simple method for modeling smoke from a single view that preserves a realistic look when the smoke is observed from different views. Thin translucent smoke, as generated by cigarettes, candles, joss sticks, and the like, is the main focus of this paper. The proposed method first computes the smoke intensity from the input key-frame image, then partitions the smoke into multiple segments. For each segment, the principal direction is calculated by principal component analysis and two basis functions are generated. The depth of each pixel in the image is estimated with the basis functions, and three-dimensional density distributions are then constructed from the intensity and depth. Finally, the smoke density distributions at key frames are used to generate animated smoke. Experimental results indicate that our method can synthesize visually realistic smoke from a single view at low computational cost.
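The per-segment principal-direction step can be illustrated with standard PCA on a point cloud: the dominant axis is the eigenvector of the covariance matrix with the largest eigenvalue. The `principal_direction` helper below is a hypothetical sketch, not the paper's code:

```python
import numpy as np

def principal_direction(points):
    """Unit vector along the dominant axis of a point cloud (PCA).

    Returns the eigenvector of the covariance matrix belonging to the
    largest eigenvalue; np.linalg.eigh sorts eigenvalues ascending,
    so the last column is the principal direction.
    """
    centred = points - points.mean(axis=0)
    cov = centred.T @ centred / len(points)
    eigvals, eigvecs = np.linalg.eigh(cov)
    return eigvecs[:, -1]

# A segment of smoke pixels stretched along the x axis
pts = np.array([[0, 0, 0], [1, 0.1, 0], [2, -0.1, 0], [3, 0, 0]], float)
d = principal_direction(pts)
```

For a roughly tubular smoke segment this axis tracks the plume's local flow direction, which is what makes it a natural frame for the depth-estimating basis functions.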
Citations: 2
Information Assisted Visualization of Large Scale Time Varying Scientific Data
Pub Date : 2011-11-04 DOI: 10.1109/ICVRV.2011.39
Wu Guoqing, Cao Yi, Yin Junping, Wang Huawei, Song Lei
Visualization of large-scale time-varying scientific data has been a challenging problem due to its ever-increasing size. Identifying and presenting the most informative (or important) aspects of the data plays an important role in efficient visualization. In this paper, an information-assisted method is presented to locate the temporal and spatial data containing salient physical features and thereby accelerate the visualization process. To locate temporal data, two information-theoretic measures are used: the KL-distance, which measures the information dissimilarity between time steps, and the off-line marginal utility, which measures the surprising information provided by each time step. To locate spatial data, a character factor is introduced that measures the feature abundance of each sub-region. Based on these measures, the method adaptively picks the important time steps and sub-regions with the maximum information content, so that time-varying data can be visualized effectively in limited time or with limited resources without losing potentially useful physical features. Experiments on radiation diffusion dynamics and plasma physics simulation data demonstrate the effectiveness of the proposed method, which can remarkably improve the way scientists analyze and understand large-scale time-varying scientific data.
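The KL-distance between time steps can be illustrated on value histograms. The symmetrised form and the histogram binning below are assumptions for the sketch, since the abstract does not specify the exact variant used:

```python
import numpy as np

def kl_distance(p_data, q_data, bins=32, eps=1e-12):
    """Symmetrised KL divergence between the value histograms of two
    time steps; large values flag informative (dissimilar) steps that
    should be kept for visualization."""
    lo = min(p_data.min(), q_data.min())
    hi = max(p_data.max(), q_data.max())
    p, _ = np.histogram(p_data, bins=bins, range=(lo, hi))
    q, _ = np.histogram(q_data, bins=bins, range=(lo, hi))
    p = p / p.sum() + eps   # normalise; eps guards log(0)
    q = q / q.sum() + eps
    return float(np.sum(p * np.log(p / q)) + np.sum(q * np.log(q / p)))

rng = np.random.default_rng(1)
a = rng.normal(0.0, 1.0, 2000)   # time step t
b = rng.normal(3.0, 1.0, 2000)   # time step t+1, shifted distribution
```

Scanning consecutive pairs with this measure and keeping steps whose distance exceeds a threshold gives a simple adaptive time-step selection.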
Citations: 1
Video Semantic Concept Detection Based on Conceptual Correlation and Boosting
Pub Date : 2011-11-04 DOI: 10.1109/ICVRV.2011.42
Dan-Wen Chen, Liqiong Deng, Lingda Wu
Semantic concept detection is a key technique in video semantic indexing. Traditional approaches do not adequately take conceptual correlation into account. A new approach based on conceptual correlation and boosting is proposed in this paper, comprising three steps: first, context-based conceptual fusion models are built using correlative concept selection; then, a boosting process based on inter-concept correlation is carried out; finally, the multiple models generated by boosting are fused. Experimental results on the TRECVID 2005 dataset show that the proposed method achieves a remarkable and consistent improvement.
Citations: 0
Foot Trajectory Kept Motion Retargeting
Pub Date : 2011-11-04 DOI: 10.1109/ICVRV.2011.34
Xiaomeng Feng, Shi Qu, Lingda Wu
This paper presents a novel method for retargeting motions. We treat the whole leg as a length-changeable skeleton: by keeping the length proportion and direction of the leg vector before and after retargeting, motion retargeting can be accomplished by scaling the root node. Because the foot-position constraint is transformed into constraints on the leg vector's length and direction, and adjusting the leg vector is easy, our method does not require a complex optimization algorithm. Experimental results show that the method runs in real time and that the characteristics of the foot trajectory are kept after retargeting.
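The leg-vector idea can be sketched as follows, assuming a single hip-to-foot vector per leg; `retarget_leg` is a hypothetical helper, not the authors' code:

```python
import numpy as np

def retarget_leg(hip, foot, leg_len_src, leg_len_tgt):
    """Retarget a foot position to a skeleton with a different leg length.

    Sketch of the paper's idea: treat the whole leg as one
    length-changeable segment and keep (a) the leg vector's direction
    and (b) its length as a *proportion* of the full leg length, so the
    foot trajectory's shape is preserved without IK optimisation.
    """
    v = foot - hip                              # source leg vector
    ratio = np.linalg.norm(v) / leg_len_src     # proportion of full leg
    direction = v / np.linalg.norm(v)           # direction is kept
    return hip + direction * ratio * leg_len_tgt

hip = np.array([0.0, 1.0, 0.0])
foot = np.array([0.0, 0.1, 0.2])
new_foot = retarget_leg(hip, foot, leg_len_src=1.0, leg_len_tgt=1.5)
```

Because both direction and length proportion are invariant, the target foot traces a uniformly scaled copy of the source trajectory, which is why no per-frame optimisation is needed.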
Citations: 2