
2013 International Conference on Virtual Reality and Visualization: Latest Publications

Anaglyph 3D Stereoscopic Visualization of 2D Video Based on Fundamental Matrix
Pub Date: 2013-09-14 | DOI: 10.1109/ICVRV.2013.59
Zhihan Lu, S. Réhman, Muhammad Sikandar Lal Khan, Haibo Li
In this paper, we propose a simple anaglyph 3D stereo generation algorithm for 2D video sequences captured with a monocular camera. Our approach employs camera pose estimation to generate stereoscopic 3D directly from 2D video, without explicitly building a depth map. The method is cost-effective, suitable for arbitrary real-world video sequences, and produces smooth results. We perform image stitching based on plane correspondence using the fundamental matrix, and we demonstrate that correspondence-plane image stitching based on the homography matrix alone cannot produce comparable results. Furthermore, we use the camera pose model reconstructed by structure from motion (with the fundamental matrix) to produce the visual anaglyph 3D illusion. The proposed approach performs very well on most video sequences.
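A minimal sketch of the overall idea, not the authors' implementation: estimate the fundamental matrix between two monocular frames (OpenCV's feature matching stands in for the paper's pose-estimation and stitching stages), rectify the pair, and compose a red-cyan anaglyph. File names and thresholds are illustrative.

```python
import cv2
import numpy as np

def anaglyph_from_frames(left_path, right_path):
    left, right = cv2.imread(left_path), cv2.imread(right_path)
    g1 = cv2.cvtColor(left, cv2.COLOR_BGR2GRAY)
    g2 = cv2.cvtColor(right, cv2.COLOR_BGR2GRAY)

    # Match ORB features and estimate the fundamental matrix with RANSAC.
    orb = cv2.ORB_create(2000)
    k1, d1 = orb.detectAndCompute(g1, None)
    k2, d2 = orb.detectAndCompute(g2, None)
    matches = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True).match(d1, d2)
    pts1 = np.float32([k1[m.queryIdx].pt for m in matches])
    pts2 = np.float32([k2[m.trainIdx].pt for m in matches])
    F, mask = cv2.findFundamentalMat(pts1, pts2, cv2.FM_RANSAC, 1.0, 0.99)

    # Rectify so epipolar lines are horizontal, then take red from the left
    # view and green/blue from the right (OpenCV images are BGR).
    inl = mask.ravel() == 1
    h, w = g1.shape
    _, H1, H2 = cv2.stereoRectifyUncalibrated(pts1[inl], pts2[inl], F, (w, h))
    left_r = cv2.warpPerspective(left, H1, (w, h))
    right_r = cv2.warpPerspective(right, H2, (w, h))
    anaglyph = right_r.copy()
    anaglyph[:, :, 2] = left_r[:, :, 2]   # channel 2 is red in BGR
    return anaglyph
```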
Citations: 16
Laser Sheet Scanning Based Smoke Acquisition and Reconstruction
Pub Date: 2013-09-14 | DOI: 10.1109/ICVRV.2013.15
Xin Gao, Yong Hu, Qing Zuo, Yue Qi
This paper develops a laser-sheet-scanning technique for capturing and reconstructing sequential volumetric models of smoke. First, a dedicated setup is introduced as the laser sheet illuminator for horizontal scanning. To achieve accurate acquisition, a signal synchronization scheme is added between the galvanometer and the high-speed camera. Then, as the laser sheet sweeps through the volume repeatedly, the illuminated smoke slices are captured; each sweep of the laser records a near-simultaneous smoke density field. In the subsequent reconstruction procedure, the real 3D positions of the pixels in the captured images are calculated from the camera and laser calibrations. Finally, these irregular smoke density fields are resampled by a 3D ordinary Kriging interpolation algorithm and reconstructed into regular smoke volumetric models. Experimental results show that the visualized smoke volumetric models reconstructed by our method are faithful, demonstrating that the approach is effective for realistic smoke modeling.
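As a rough illustration of the resampling step, the sketch below maps one sweep's scattered density samples onto a regular voxel grid. SciPy's `griddata` is a stand-in for the paper's 3D Kriging interpolation, and all shapes and names are assumptions.

```python
import numpy as np
from scipy.interpolate import griddata

def resample_sweep(points, densities, resolution=64):
    """points: (N, 3) world positions recovered from the calibrated slices;
    densities: (N,) per-pixel smoke densities from one laser sweep."""
    lo, hi = points.min(axis=0), points.max(axis=0)
    axes = [np.linspace(lo[i], hi[i], resolution) for i in range(3)]
    gx, gy, gz = np.meshgrid(*axes, indexing="ij")
    # Linear scattered-data interpolation onto the regular grid; the paper
    # uses Kriging here instead.
    return griddata(points, densities, (gx, gy, gz),
                    method="linear", fill_value=0.0)
```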
Citations: 0
Divide and Conquer Ray Tracing Algorithm Based on BVH Partition
Pub Date: 2013-09-14 | DOI: 10.1109/ICVRV.2013.16
Wu Zhefu, Yu Hong, Chen Bin
A new fast divide-and-conquer ray tracing algorithm based on BVH partitioning is proposed. It removes unnecessary rays from each subspace, addressing the problem that space-subdivision schemes produce bounding boxes that do not tightly enclose the primitives and therefore admit unnecessary rays into each subspace. Its core idea is to use a bin-based BVH construction algorithm to partition the primitives into two parts and then distribute primitives and rays into the corresponding subspaces using a stream filter. When the numbers of rays and primitives intersecting a subspace meet a termination condition, basic ray tracing begins on the primitives and rays in that subspace. A comparison between divide-and-conquer ray tracing using BVH and using space-subdivision schemes such as kd-trees and grids shows that our method substantially reduces computation on unnecessary rays in each subspace and yields significantly faster performance.
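A minimal sketch, under assumed data layouts, of the two ingredients the abstract names: a binned split of the primitives along their widest axis (a simplified stand-in for a full SAH-binned BVH build) and a stream filter that keeps only the rays whose slab test hits a child's bounding box.

```python
import numpy as np

def split_primitives(centroids, n_bins=16):
    """Binned split along the widest axis; returns a boolean left-side mask.
    A real SAH build would weigh bin surface areas rather than balance counts."""
    lo, hi = centroids.min(axis=0), centroids.max(axis=0)
    axis = int(np.argmax(hi - lo))
    edges = np.linspace(lo[axis], hi[axis], n_bins + 1)
    counts, _ = np.histogram(centroids[:, axis], edges)
    best = int(np.argmin(np.abs(np.cumsum(counts) - counts.sum() / 2)))
    return centroids[:, axis] <= edges[best + 1]

def filter_rays(orig, inv_dir, box_lo, box_hi):
    """Stream filter via the slab test. orig: (N, 3) ray origins;
    inv_dir: precomputed 1/direction (axis-parallel rays handled upstream)."""
    t0 = (box_lo - orig) * inv_dir
    t1 = (box_hi - orig) * inv_dir
    t_near = np.minimum(t0, t1).max(axis=1)
    t_far = np.maximum(t0, t1).min(axis=1)
    return (t_near <= t_far) & (t_far >= 0.0)   # rays worth keeping
```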
Citations: 5
A Novel Depth Recovery Approach from Multi-View Stereo Based Focusing
Pub Date: 2013-09-14 | DOI: 10.1109/ICVRV.2013.34
Zhaolin Xiao, Heng Yang, Qing Wang, Guoqing Zhou
In this paper, we propose a novel depth recovery method based on multi-view stereo and focusing. Inspired by 4D light field theory, we identify the relationship between classical multi-view stereo (MVS) and depth-from-focus (DFF) methods and examine their different frequency distributions in 2D light field space. We then separate depth recovery into two steps: in the first stage, we choose depth candidates using an existing multi-view stereo method; in the second, a depth-from-focus algorithm determines the final depth. As is well known, multi-view stereo and depth from focus need different kinds of input images, which cannot be acquired at the same time with a traditional imaging system. We address this issue with a camera array system and synthetic aperture photography, so that multi-view images and images with distinct defocus blur can be captured simultaneously. Experimental results show that the proposed method combines the advantages of MVS and DFF, and that the recovered depth is better than that of traditional methods.
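A minimal sketch of the two-stage selection, assuming precomputed per-pixel cost volumes: `mvs_cost[d, y, x]` from multi-view matching and `focus[d, y, x]` from a focus measure (e.g. sum-modified-Laplacian) on the synthetic-aperture refocused stack. The names and the candidate count `k` are illustrative.

```python
import numpy as np

def two_stage_depth(mvs_cost, focus, k=5):
    # Stage 1: keep the k most photo-consistent depth candidates per pixel
    # (lowest matching cost first).
    cand = np.argsort(mvs_cost, axis=0)[:k]          # (k, H, W) depth indices
    # Stage 2: among those candidates, pick the depth with the strongest
    # focus response.
    cand_focus = np.take_along_axis(focus, cand, axis=0)
    best = np.argmax(cand_focus, axis=0)             # (H, W) index into cand
    return np.take_along_axis(cand, best[None], axis=0)[0]
```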
Citations: 1
Outliers Elimination Based Ransac for Fundamental Matrix Estimation
Pub Date: 2013-09-14 | DOI: 10.1109/ICVRV.2013.63
Shuqiang Yang, Biao Li
To accelerate the RANSAC process for fundamental matrix estimation, two modifications to RANSAC are proposed. First, in the verification stage, the hypothesis is not verified against the correspondences; instead, the singular values of the estimated fundamental matrix are used directly to evaluate its validity. Second, after a plausible estimate is obtained, the obvious outliers are eliminated from the correspondence set. This raises the inlier ratio in the remaining correspondence set, which accelerates the sampling process. We call our method outlier-elimination-based RANSAC (OE-RANSAC). Experimental results on both synthetic and real data confirm the efficiency of OE-RANSAC.
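A minimal sketch of the two modifications on top of an 8-point solver (unnormalized, for brevity); the singular-value gate and the elimination threshold below are illustrative stand-ins for the paper's exact criteria.

```python
import numpy as np

def eight_point(x1, x2):
    """x1, x2: (8+, 2) matched points; returns a rank-2 fundamental matrix."""
    A = np.column_stack([
        x2[:, 0] * x1[:, 0], x2[:, 0] * x1[:, 1], x2[:, 0],
        x2[:, 1] * x1[:, 0], x2[:, 1] * x1[:, 1], x2[:, 1],
        x1[:, 0], x1[:, 1], np.ones(len(x1))])
    F = np.linalg.svd(A)[2][-1].reshape(3, 3)
    U, s, Vt = np.linalg.svd(F)
    return U @ np.diag([s[0], s[1], 0.0]) @ Vt      # enforce rank 2

def sampson_error(F, x1, x2):
    p1 = np.column_stack([x1, np.ones(len(x1))])
    p2 = np.column_stack([x2, np.ones(len(x2))])
    Fx1, Ftx2 = p1 @ F.T, p2 @ F
    return (np.sum(p2 * Fx1, axis=1) ** 2 /
            (Fx1[:, 0]**2 + Fx1[:, 1]**2 + Ftx2[:, 0]**2 + Ftx2[:, 1]**2))

def oe_ransac(x1, x2, iters=500, sv_gate=0.01, elim_thresh=10.0,
              inlier_thresh=1.0):
    rng = np.random.default_rng(0)
    alive = np.ones(len(x1), bool)       # correspondences not yet eliminated
    best_F, best_count = None, -1
    for _ in range(iters):
        idx = rng.choice(np.flatnonzero(alive), 8, replace=False)
        F = eight_point(x1[idx], x2[idx])
        s = np.linalg.svd(F, compute_uv=False)
        if s[1] / s[0] < sv_gate:        # singular-value check: reject a
            continue                     # near-degenerate F without scoring
        err = sampson_error(F, x1, x2)
        count = np.count_nonzero(err[alive] < inlier_thresh)
        if count > best_count:
            best_F, best_count = F, count
            alive[err > elim_thresh] = False   # drop obvious outliers to
                                               # raise the inlier ratio
    return best_F, alive
```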
Citations: 13
Edge-Guided Depth Map Resampling for HEVC 3D Video Coding
Pub Date: 2013-09-14 | DOI: 10.1109/ICVRV.2013.29
Yi Yang, Jiangbin Zheng
The multi-view video plus depth (MVD) format is considered essential for next-generation three-dimensional television (3DTV), and compression of this format is crucial. Depth images are characterized by large homogeneous areas and sharp edges between objects. It has been observed that efficient compression can be achieved by a down/up-sampling procedure applied as pre- and post-processing around video coding. Following this scheme, we propose an edge-guided depth map resampling method: we combine the edge information of both the texture and depth images for edge preservation, and extend the concept of image gradient-domain reconstruction to depth upsampling, forming a linear system whose least-squares solution yields the upsampled depth. Experimental results show that the proposed method improves both depth map coding efficiency and synthesized view quality. Additionally, the upscaling method can be used for super-resolution reconstruction of depth data captured by depth sensors such as Kinect or TOF cameras.
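A minimal sketch of one plausible form of such a linear system: known low-resolution depth samples contribute data equations, and neighbor-smoothness equations are down-weighted across edges detected in the texture and depth images. The weighting scheme and `lam` are assumptions, not the paper's exact formulation.

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import spsolve

def upsample_depth(samples, mask, edge_w, lam=10.0):
    """samples/mask: (H, W) sparse known depths and their indicator;
    edge_w: (H, W) weights in [0, 1], small near texture/depth edges."""
    h, w = samples.shape
    n = h * w
    idx = np.arange(n).reshape(h, w)
    rows, cols, vals, b = [], [], [], []
    eq = 0
    # Data terms: lam * D(p) = lam * sample(p) at known pixels.
    for y, x in zip(*np.nonzero(mask)):
        rows.append(eq); cols.append(idx[y, x]); vals.append(lam)
        b.append(lam * samples[y, x]); eq += 1
    # Smoothness terms: w * (D(p) - D(q)) = 0 for right/down neighbors.
    for dy, dx in ((0, 1), (1, 0)):
        p = idx[:h - dy, :w - dx].ravel()
        q = idx[dy:, dx:].ravel()
        wgt = np.minimum(edge_w[:h - dy, :w - dx], edge_w[dy:, dx:]).ravel()
        for pi, qi, wi in zip(p, q, wgt):
            rows += [eq, eq]; cols += [pi, qi]; vals += [wi, -wi]
            b.append(0.0); eq += 1
    A = sp.csr_matrix((vals, (rows, cols)), shape=(eq, n))
    # Normal equations give the least-squares solution of the stacked system.
    return spsolve((A.T @ A).tocsc(), A.T @ np.asarray(b)).reshape(h, w)
```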
Citations: 4
Saliency-Guided Luminance Enhancement for 3D Shape Depiction
Pub Date: 2013-09-14 | DOI: 10.1109/ICVRV.2013.10
W. Hao, Yinghui Wang
In this paper, we present a novel saliency-guided shading scheme for 3D shape depiction that incorporates mesh saliency into luminance enhancement. Using a distance-based mesh saliency computation, we propose a new perceptual saliency measure that identifies salient surface regions. Guided by these regions, we emphasize both the details and the overall shape of a model by locally enhancing the high-frequency component of vertex luminance. The enhancement strength is not controlled by the user; it is determined by the surface shape. Experimental results demonstrate that our method produces satisfying results with Phong shading, Gooch shading, and cartoon shading. Compared to previous techniques, our approach effectively improves shape depiction without impairing the desired appearance.
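A minimal sketch of the enhancement rule as described: split per-vertex luminance into low- and high-frequency parts with a 1-ring average, then boost the high-frequency part with a gain driven by the saliency value rather than a user parameter. The gain mapping is illustrative.

```python
import numpy as np

def enhance_luminance(lum, saliency, adjacency, base_gain=2.0):
    """lum, saliency: (V,) per-vertex values; adjacency: list of neighbor
    index lists (the 1-ring of each vertex)."""
    # Low-pass: average each vertex's luminance with its 1-ring neighbors.
    smooth = np.array([(lum[i] + lum[nbrs].sum()) / (len(nbrs) + 1)
                       for i, nbrs in enumerate(adjacency)])
    high = lum - smooth                       # high-frequency detail layer
    # Saliency, not the user, sets the local enhancement strength.
    gain = base_gain * saliency / max(saliency.max(), 1e-8)
    return np.clip(lum + gain * high, 0.0, 1.0)
```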
Citations: 1
Artificial Potential Field Based Cooperative Particle Filter for Multi-View Multi-Object Tracking
Pub Date: 2013-09-14 | DOI: 10.1109/ICVRV.2013.20
Xiao-min Tong, Yanning Zhang, Tao Yang
To continuously track multiple occluded objects in a crowded scene, we propose a new multi-view multi-object tracking method based on an artificial potential field and cooperative particle filters, combining bottom-up and top-down tracking for better results. After obtaining an accurate occupancy map through a multi-planar consistency constraint, we predict the tracking probability map via cooperation among multiple particle filters. The key idea is to treat the cooperation of the particle filters as path planning, with the particles' random shifting guided by the artificial potential field. Comparative experiments against a traditional blob-detection tracking algorithm demonstrate the effectiveness and robustness of our method.
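A minimal sketch of the guidance step, with an assumed potential: each target's particles drift toward that target's predicted position while being repelled by the other targets' current estimates, then receive the usual random diffusion. The potential shape and all constants are illustrative.

```python
import numpy as np

def potential_gradient(p, goal, others, k_att=1.0, k_rep=0.5, radius=2.0):
    g = k_att * (goal - p)                    # attractive well at the goal
    for o in others:                          # repulsion from other targets
        d = p - o
        dist = np.linalg.norm(d) + 1e-8
        if dist < radius:                     # repulse only when close
            g += k_rep * (1.0 / dist - 1.0 / radius) * d / dist**3
    return g

def shift_particles(particles, goal, others, step=0.1, noise=0.05, rng=None):
    """particles: (N, 2) ground-plane positions for one target's filter."""
    rng = rng or np.random.default_rng(0)
    drift = np.stack([potential_gradient(p, goal, others) for p in particles])
    return particles + step * drift + noise * rng.standard_normal(particles.shape)
```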
Citations: 4
3D-Realtime-Monitor System for Lunar Rover
Pub Date: 2013-09-14 | DOI: 10.1109/ICVRV.2013.51
P. Zhang, Guopeng Li, Jianjun Liu, X. Ren, Xingye Gao
The 3D-Realtime-Monitor system is a real-time virtual reality system consisting of a data server and a rendering client. It is driven by runtime telemetry data and integrates kinematics and dynamics models of the rover as well as a terrain model of the real lunar surface. The processing of the telemetry data is described in detail. The modeling methods proposed in this paper include constructing the lunar surface, constructing 3D models of the lander and rover, building a kinematic model of the rover body, and building a wheel-terrain interaction model. Photogrammetry and remote sensing information are used to generate the terrain model of the lunar surface. The implementation results show that the 3D-Realtime-Monitor system is an effective assistance system for making exploration plans and monitoring rover status.
Citations: 0
An Interactive Warping Method for Multi-channel VR Projection Display Systems with Quadric Surface Screens
Pub Date: 2013-09-14 | DOI: 10.1109/ICVRV.2013.9
Fang Sun, Weiliang Meng
In this paper, we present a practical, non-camera-based interactive warping method for multi-channel immersive VR projection display systems with quadric surface screens. Instead of using one or more cameras, as most previous methods do, we employ a commercial theodolite and a mouse to interactively calibrate each projector on site. By taking advantage of the known shape of the curved screen, we can perform fast, robust projector calibration and compute the warping map for each projector while taking other system information into account, i.e., the position and frustum of the designed eye point (DEP). Compared with camera-based solutions, our method is accurate, cost-effective, and simple to operate, and it efficiently reduces system set-up time and complexity. The feasibility of our method has been verified in many real installations.
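A minimal sketch of how a warp map can be computed from such a calibration: cast a ray from the DEP through each pixel, intersect it with the quadric screen X^T Q X = 0, and project the hit point with the projector's matrix. Q, the 3x4 projector matrix, and the per-pixel ray directions are all assumed outputs of the on-site theodolite/mouse calibration, not part of the paper's published interface.

```python
import numpy as np

def ray_quadric_hit(origin, direction, Q):
    """Nearest forward intersection of X(t) = origin + t*direction with the
    quadric X^T Q X = 0 (Q a symmetric 4x4 matrix, homogeneous coords)."""
    o = np.append(origin, 1.0)
    d = np.append(direction, 0.0)
    a, b, c = d @ Q @ d, 2.0 * (o @ Q @ d), o @ Q @ o
    disc = b * b - 4.0 * a * c
    if abs(a) < 1e-12 or disc < 0.0:
        return None
    roots = [(-b - np.sqrt(disc)) / (2 * a), (-b + np.sqrt(disc)) / (2 * a)]
    hits = [t for t in roots if t > 1e-6]
    return None if not hits else origin + min(hits) * np.asarray(direction)

def warp_map(eye, rays, P_proj, Q):
    """rays: (H, W, 3) per-pixel view directions of the DEP frustum;
    returns (H, W, 2) projector pixel coordinates (NaN where a ray misses)."""
    h, w, _ = rays.shape
    out = np.full((h, w, 2), np.nan)
    for y in range(h):
        for x in range(w):
            X = ray_quadric_hit(eye, rays[y, x], Q)
            if X is not None:
                p = P_proj @ np.append(X, 1.0)   # project hit into projector
                out[y, x] = p[:2] / p[2]
    return out
```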
Citations: 1