
2013 International Conference on Virtual Reality and Visualization: Latest Publications

Anaglyph 3D Stereoscopic Visualization of 2D Video Based on Fundamental Matrix
Pub Date : 2013-09-14 DOI: 10.1109/ICVRV.2013.59
Zhihan Lu, S. Réhman, Muhammad Sikandar Lal Khan, Haibo Li
In this paper, we propose a simple anaglyph 3D stereo generation algorithm for 2D video sequences captured with a monocular camera. In our novel approach, we employ camera pose estimation to generate stereoscopic 3D directly from 2D video, without explicitly building a depth map. The method is cost-effective, suitable for arbitrary real-world video sequences, and produces smooth results. We use image stitching based on plane correspondence via the fundamental matrix, and we demonstrate that correspondence-plane image stitching based on the homography matrix alone cannot produce comparable results. Furthermore, we utilize the camera pose model reconstructed by structure from motion (with the fundamental matrix) to accomplish the visual anaglyph 3D illusion. The proposed approach performs very well for most of the video sequences tested.
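A minimal sketch of the two outer steps the abstract names, not the authors' implementation: estimating the fundamental matrix between two frames of a monocular video with OpenCV's RANSAC fitter, then composing a red-cyan anaglyph from a frame pair. The frame file names, feature count, and RANSAC threshold are illustrative assumptions.

```python
import cv2
import numpy as np

def fundamental_from_frames(img1, img2):
    """Match ORB features across two frames and fit F with RANSAC."""
    orb = cv2.ORB_create(2000)
    k1, d1 = orb.detectAndCompute(img1, None)
    k2, d2 = orb.detectAndCompute(img2, None)
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(d1, d2), key=lambda m: m.distance)
    pts1 = np.float32([k1[m.queryIdx].pt for m in matches])
    pts2 = np.float32([k2[m.trainIdx].pt for m in matches])
    F, inliers = cv2.findFundamentalMat(pts1, pts2, cv2.FM_RANSAC, 1.0, 0.99)
    mask = inliers.ravel() == 1
    return F, pts1[mask], pts2[mask]

def make_anaglyph(left, right):
    """Red channel from the left view, green/blue from the right view."""
    anaglyph = right.copy()
    anaglyph[:, :, 2] = left[:, :, 2]   # OpenCV is BGR: index 2 is red
    return anaglyph

left = cv2.imread("frame_t.png")        # assumed file names
right = cv2.imread("frame_t_plus_k.png")
F, p1, p2 = fundamental_from_frames(left, right)
cv2.imwrite("anaglyph.png", make_anaglyph(left, right))
```

In the paper the frame pair is related through the reconstructed camera poses rather than taken directly from the sequence; the anaglyph composition step is the same either way.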
Citations: 16
Laser Sheet Scanning Based Smoke Acquisition and Reconstruction
Pub Date : 2013-09-14 DOI: 10.1109/ICVRV.2013.15
Xin Gao, Yong Hu, Qing Zuo, Yue Qi
This paper develops a laser-sheet-scanning technique for capturing and reconstructing sequential volumetric models of smoke. First, a dedicated setup is introduced as the laser sheet illuminator for horizontal scanning. To achieve accurate acquisition, a signal synchronization scheme is added between the galvanometer and the high-speed camera. Then, with the laser sheet sweeping through the volume repeatedly, the illuminated smoke slices are captured; each sweep of the laser records a near-simultaneous smoke density field. In the subsequent reconstruction procedure, the 3D positions of the pixels of the captured images are calculated through camera and laser calibration. Finally, these irregular smoke density fields are resampled by a 3D Kriging interpolation algorithm and reconstructed into regular smoke volumetric models. In our experiments, the fidelity of the visualized smoke volumetric models demonstrates that the approach is effective for realistic smoke modeling.
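A minimal sketch of the resampling step: ordinary Kriging of scattered 3D density samples onto a regular grid. The paper does not specify the variogram, so the Gaussian covariance, its length scale, and the synthetic data below are assumptions.

```python
import numpy as np

def ordinary_kriging_3d(pts, vals, query, length=0.1, sill=1.0):
    """Resample scattered samples pts (n,3) with values vals (n,)
    at query (m,3) using ordinary Kriging with a Gaussian covariance."""
    def cov(a, b):
        d2 = ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)
        return sill * np.exp(-d2 / length**2)

    n = len(pts)
    # Kriging system: [[C, 1], [1^T, 0]] [w; mu] = [c0; 1]
    A = np.ones((n + 1, n + 1))
    A[:n, :n] = cov(pts, pts) + 1e-10 * np.eye(n)  # jitter for stability
    A[n, n] = 0.0
    rhs = np.ones((n + 1, len(query)))
    rhs[:n] = cov(pts, query)
    w = np.linalg.solve(A, rhs)                    # weights per query point
    return vals @ w[:n]

# Usage with synthetic slice samples: resample onto a 16^3 regular volume.
rng = np.random.default_rng(0)
pts = rng.random((200, 3))                         # irregular sample positions
vals = np.exp(-((pts - 0.5) ** 2).sum(1) * 8)      # synthetic density
g = np.linspace(0, 1, 16)
grid = np.stack(np.meshgrid(g, g, g, indexing="ij"), -1).reshape(-1, 3)
density = ordinary_kriging_3d(pts, vals, grid).reshape(16, 16, 16)
```

The dense solve is O(n^3); a practical reconstruction would krige per grid cell from a local neighborhood of samples.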
Citations: 0
Divide and Conquer Ray Tracing Algorithm Based on BVH Partition
Pub Date : 2013-09-14 DOI: 10.1109/ICVRV.2013.16
Wu Zhefu, Yu Hong, Chen Bin
A new fast divide-and-conquer ray tracing algorithm based on BVH partitioning, which can remove unnecessary rays from each subspace, is proposed to address the problem that space-division schemes produce bounding boxes that do not tightly enclose the primitives, thereby admitting unnecessary rays into subspaces. Its core idea is to use a bin-based BVH construction algorithm to partition the primitives into two parts and then distribute primitives and rays into the corresponding subspaces with a stream filter. When the number of rays and primitives intersecting a subspace satisfies a limit condition, the primitives and rays in that subspace proceed to basic ray tracing. A comparison between divide-and-conquer ray tracing using BVH and using space-division schemes such as a kd-tree or a grid shows that our method substantially reduces computation on unnecessary rays in subspaces and leads to significantly faster performance.
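A minimal sketch of the recursion, under stated assumptions: a median cut on the widest axis stands in for the paper's binned BVH build, the stream filter is a slab test against each child's bounding box, and the leaf stage returns candidate ray/triangle pairs for a final exact intersection test. Ray directions are assumed to have nonzero components.

```python
import numpy as np

def ray_hits_aabb(orig, inv_dir, lo, hi):
    """Slab test; assumes nonzero direction components."""
    t1 = (lo - orig) * inv_dir
    t2 = (hi - orig) * inv_dir
    tmin = np.minimum(t1, t2).max()
    tmax = np.maximum(t1, t2).min()
    return tmax >= max(tmin, 0.0)

def dacrt(tri_centers, tris, rays, leaf_size=8):
    """tris: list of (3,3) triangles; rays: list of (origin, direction).
    Returns candidate (ray, triangle) pairs for the basic tracing stage."""
    if len(tris) <= leaf_size or len(rays) <= 4:
        return [(r, t) for r in rays for t in tris]
    axis = int(tri_centers.ptp(axis=0).argmax())   # widest extent
    order = tri_centers[:, axis].argsort()
    half = len(order) // 2
    pairs = []
    for idx in (order[:half], order[half:]):       # two-way BVH partition
        sub = [tris[i] for i in idx]
        pts = np.concatenate([np.asarray(t) for t in sub])
        lo, hi = pts.min(axis=0), pts.max(axis=0)  # tight child AABB
        # stream filter: keep only rays that intersect this child's box
        kept = [r for r in rays if ray_hits_aabb(r[0], 1.0 / r[1], lo, hi)]
        if kept:
            pairs += dacrt(tri_centers[idx], sub, kept, leaf_size)
    return pairs
```

Because the child boxes are fitted to the primitives rather than to a spatial split, rays that pass between clusters are discarded early, which is the effect the abstract attributes to the BVH partition.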
Citations: 5
A Novel Depth Recovery Approach from Multi-View Stereo Based Focusing
Pub Date : 2013-09-14 DOI: 10.1109/ICVRV.2013.34
Zhaolin Xiao, Heng Yang, Qing Wang, Guoqing Zhou
In this paper, we propose a novel depth recovery method that combines multi-view stereo with focusing. Inspired by 4D light field theory, we identify the relationship between classical multi-view stereo (MVS) and depth-from-focus (DFF) methods, which attend to different frequency distributions in the 2D light field space. We then propose separating depth recovery into two steps: in the first stage, we choose several depth candidates using an existing multi-view stereo method; in the second, a depth-from-focus algorithm determines the final depth. As is well known, multi-view stereo and depth from focus require different kinds of input images, which cannot be acquired at the same time with a traditional imaging system. We address this issue with a camera array system and synthetic aperture photography, so that multi-view images and images with distinct defocus blur can be captured simultaneously. Experimental results show that the proposed method combines the advantages of MVS and DFF, and the recovered depth is better than with traditional methods.
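A minimal sketch of the two-stage selection, under assumptions: `mvs_cost` is a precomputed (D, H, W) matching-cost volume, `refocused` is a (D, H, W) synthetic-aperture stack refocused at the same D candidate depths, and the sum-of-modified-Laplacian focus measure with a 7x7 support window stands in for whatever measure the paper uses.

```python
import numpy as np
from scipy.ndimage import convolve, uniform_filter

def focus_measure(stack):
    """Per-depth local sharpness: |Laplacian| averaged over a 7x7 window."""
    lap = np.array([[0, 1, 0], [1, -4, 1], [0, 1, 0]], float)
    sml = np.abs(np.stack([convolve(s, lap) for s in stack]))
    return np.stack([uniform_filter(s, 7) for s in sml])

def two_stage_depth(mvs_cost, refocused, k=3):
    # Stage 1: keep the k lowest-cost MVS depth candidates per pixel.
    cand = np.argsort(mvs_cost, axis=0)[:k]            # (k, H, W)
    # Stage 2: among those candidates, pick the depth that is sharpest.
    fm = focus_measure(refocused)                      # (D, H, W)
    fm_at_cand = np.take_along_axis(fm, cand, axis=0)  # (k, H, W)
    best = fm_at_cand.argmax(axis=0)                   # (H, W)
    return np.take_along_axis(cand, best[None], axis=0)[0]  # depth indices
```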
Citations: 1
Outliers Elimination Based Ransac for Fundamental Matrix Estimation
Pub Date : 2013-09-14 DOI: 10.1109/ICVRV.2013.63
Shuqiang Yang, Biao Li
To accelerate the RANSAC process for fundamental matrix estimation, two modifications to RANSAC are proposed. First, in the verification stage, the hypothesis is not verified against the correspondences; instead, the singular values of the estimated fundamental matrix are used directly to evaluate its validity. Second, after a plausible estimate is obtained, obvious outliers are eliminated from the correspondence set. This raises the inlier ratio in the remaining correspondences, which accelerates the sampling process. We call our method outlier-elimination-based RANSAC (OE-RANSAC). Experimental results on both synthetic and real data testify to the efficiency of OE-RANSAC.
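A minimal sketch of both modifications as I read them, not the authors' code: each hypothesis comes from an unconstrained normalized 8-point solve, its quality is scored by how close the third singular value is to zero (a rank-2 fundamental matrix has exactly two nonzero singular values, so no per-point verification loop is needed), and once a plausible F appears, gross outliers are removed by Sampson distance. Both thresholds are illustrative assumptions.

```python
import numpy as np

def eight_point(p1, p2):
    """Linear (unconstrained) 8-point estimate with Hartley normalization."""
    def normalize(p):
        c = p.mean(0)
        s = np.sqrt(2) / np.linalg.norm(p - c, axis=1).mean()
        T = np.array([[s, 0, -s * c[0]], [0, s, -s * c[1]], [0, 0, 1]])
        return np.column_stack([p, np.ones(len(p))]) @ T.T, T
    x1, T1 = normalize(p1)
    x2, T2 = normalize(p2)
    A = np.column_stack([x2[:, i:i + 1] * x1 for i in range(3)])  # (8, 9)
    F = np.linalg.svd(A)[2][-1].reshape(3, 3)   # min-singular-value vector
    return T2.T @ F @ T1

def sampson_dist(F, p1, p2):
    x1 = np.column_stack([p1, np.ones(len(p1))])
    x2 = np.column_stack([p2, np.ones(len(p2))])
    Fx1, Ftx2 = x1 @ F.T, x2 @ F
    num = np.sum(x2 * Fx1, axis=1) ** 2
    return num / (Fx1[:, 0]**2 + Fx1[:, 1]**2 + Ftx2[:, 0]**2 + Ftx2[:, 1]**2)

def oe_ransac(p1, p2, iters=500, sv_ratio=1e-3, elim=10.0, seed=0):
    rng = np.random.default_rng(seed)
    idx = np.arange(len(p1))                 # surviving correspondence set
    best_F, best = None, np.inf
    for _ in range(iters):
        s = rng.choice(idx, 8, replace=False)
        F = eight_point(p1[s], p2[s])
        sv = np.linalg.svd(F, compute_uv=False)
        score = sv[2] / sv[1]                # near 0 iff F is close to rank 2
        if score < best:
            best_F, best = F, score
            if score < sv_ratio:             # plausible: eliminate outliers
                keep = sampson_dist(F, p1[idx], p2[idx]) < elim
                if keep.sum() >= 8:
                    idx = idx[keep]
    return best_F, idx
```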
Citations: 13
Edge-Guided Depth Map Resampling for HEVC 3D Video Coding
Pub Date : 2013-09-14 DOI: 10.1109/ICVRV.2013.29
Yi Yang, Jiangbin Zheng
The multi-view video plus depth (MVD) format is considered essential for next-generation three-dimensional television (3DTV), and compression of this format is crucial. Depth images feature large homogeneous areas and sharp edges between objects, and it has been observed that efficient compression can be achieved by a down/up-sampling procedure used as pre- and post-processing for video coding. We propose an edge-guided depth map resampling method based on this scheme: we combine the edge information of both the texture and the depth image for edge preservation, and extend gradient-domain image reconstruction to depth up-sampling by forming a linear system whose least-squares solution is the up-sampled depth map. Experimental results show that the proposed method improves both depth map coding efficiency and synthesized view quality. Additionally, the up-scaling method can be used for super-resolution reconstruction of depth data captured by depth sensors such as Kinect or ToF cameras.
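A minimal sketch of the gradient-domain linear system, assuming `low` is the decoded low-resolution depth map and `edges` a boolean edge map derived from the full-resolution texture; the 4x factor and the weights are illustrative. Data terms pin the known low-resolution samples; smoothness terms penalize gradients everywhere except across edges, which preserves depth discontinuities.

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import lsqr

def upsample_depth(low, edges, f=4, w_data=10.0, w_edge=0.05):
    H, W = low.shape[0] * f, low.shape[1] * f
    pid = lambda y, x: y * W + x            # pixel -> unknown index
    rows, cols, vals, b = [], [], [], []
    eq = 0
    # data terms at the known low-res sample positions
    for y in range(low.shape[0]):
        for x in range(low.shape[1]):
            rows.append(eq); cols.append(pid(y * f, x * f))
            vals.append(w_data); b.append(w_data * low[y, x]); eq += 1
    # smoothness terms: zero gradient unless the texture edge map fires
    for y in range(H):
        for x in range(W):
            for dy, dx in ((0, 1), (1, 0)):
                if y + dy < H and x + dx < W:
                    w = w_edge if edges[y, x] or edges[y + dy, x + dx] else 1.0
                    rows += [eq, eq]
                    cols += [pid(y, x), pid(y + dy, x + dx)]
                    vals += [w, -w]; b.append(0.0); eq += 1
    A = sp.csr_matrix((vals, (rows, cols)), shape=(eq, H * W))
    return lsqr(A, np.array(b))[0].reshape(H, W)
```

The assembly loops are written for clarity; a production version would vectorize the stencil and exploit the system's banded structure.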
Citations: 4
Saliency-Guided Luminance Enhancement for 3D Shape Depiction
Pub Date : 2013-09-14 DOI: 10.1109/ICVRV.2013.10
W. Hao, Yinghui Wang
We present a novel saliency-guided shading scheme for 3D shape depiction that incorporates mesh saliency into luminance enhancement. Using a distance-based mesh saliency computation, we propose a new perceptual saliency measure that identifies salient surface regions. Guided by these visually salient regions, we emphasize the details and the overall shape of models by locally enhancing the high-frequency component of vertex luminance. The enhancement strength is not controlled by the user but determined by the surface shape. Experimental results demonstrate that our method produces satisfying results with Phong shading, Gooch shading, and cartoon shading. Compared to previous techniques, our approach effectively improves shape depiction without impairing the desired appearance.
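A minimal sketch of the enhancement step in an unsharp-masking form, which is my reading of "locally enhancing the high-frequency of vertex luminance" and an assumption: low-pass the per-vertex luminance over the one-ring neighborhood, then amplify the residual in proportion to per-vertex saliency.

```python
import numpy as np

def enhance_luminance(lum, neighbors, saliency, gain=2.0):
    """lum: (n,) vertex luminance; neighbors: per-vertex one-ring index
    lists; saliency: (n,) values in [0, 1]. Returns enhanced luminance."""
    smooth = np.array([lum[nb].mean() if len(nb) else lum[i]
                       for i, nb in enumerate(neighbors)])
    high = lum - smooth                        # high-frequency detail
    return np.clip(smooth + (1.0 + gain * saliency) * high, 0.0, 1.0)
```

Because the boost scales with saliency rather than a user slider, flat regions stay untouched while perceptually salient regions get stronger local contrast, matching the abstract's claim that strength is determined by the surface shape.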
Citations: 1
Artificial Potential Field Based Cooperative Particle Filter for Multi-View Multi-Object Tracking
Pub Date : 2013-09-14 DOI: 10.1109/ICVRV.2013.20
Xiao-min Tong, Yanning Zhang, Tao Yang
To continuously track multiple occluded objects in a crowded scene, we propose a new multi-view multi-object tracking method based on an artificial potential field and cooperative particle filters, combining bottom-up and top-down tracking for better results. After obtaining an accurate occupancy map through a multi-planar consistency constraint, we predict the tracking probability map via cooperation among multiple particle filters. The key point is that the cooperation of the particle filters is treated as path planning, with the random shifting of particles guided by the artificial potential field. Comparative experiments against a traditional blob-detection tracking algorithm demonstrate the effectiveness and robustness of our method.
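A minimal sketch of the guiding idea, with illustrative field parameters: each target's particles diffuse as usual, but an artificial potential field adds a repulsive drift away from the other targets' current estimates, so cooperating filters do not collapse onto the same occupied region during occlusions.

```python
import numpy as np

def repulsive_drift(particles, other_means, strength=4.0, radius=30.0):
    """Sum of repulsive forces from the other targets' estimated positions."""
    drift = np.zeros_like(particles)
    for m in other_means:
        d = particles - m                      # vectors away from neighbor
        dist = np.linalg.norm(d, axis=1, keepdims=True) + 1e-6
        drift += strength * np.exp(-dist / radius) * d / dist
    return drift

def propagate(particles, other_means, sigma=5.0,
              rng=np.random.default_rng()):
    """Gaussian diffusion plus potential-field guidance of the shift."""
    noise = rng.normal(0.0, sigma, particles.shape)
    return particles + noise + repulsive_drift(particles, other_means)
```

The weighting and resampling stages of each filter are unchanged; only the proposal step gains the drift term, which is where the path-planning interpretation enters.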
Citations: 4
WebVRGIS: WebGIS Based Interactive Online 3D Virtual Community
Pub Date : 2013-09-14 DOI: 10.1109/ICVRV.2013.23
Zhihan Lu, S. Réhman, Ge Chen
In this paper we present a WebVRGIS-based interactive online 3D virtual community built on WebGIS and web VR technology. It is a multi-dimensional (MD) WebGIS-based 3D interactive online virtual community: a virtual real-time 3D communication system and web development platform capable of running in a variety of browsers. In this work, four key issues are studied: (1) fusion of multi-source MD geographical data in the WebGIS, (2) combining the scene with 3D avatars, (3) massive-data network dispatch, and (4) real-time interaction among multi-user avatars. The system is divided into three modules: data preprocessing, background management, and front-end user interaction. The core of the front-end interaction module is packaged in the MD map expression engine 3GWebMapper and the free plug-in network 3D rendering engine WebFlashVR. We evaluated the robustness of the system on three campuses of the Ocean University of China (OUC) as a testing base. The results show that the system is efficient, easy to use, and robust.
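A minimal sketch of one ingredient the abstract names, massive-data network dispatch for multi-user avatars; the message schema and the distance-culling policy are hypothetical, not the system's actual protocol.

```python
import json
from dataclasses import dataclass, asdict

@dataclass
class AvatarState:
    """Hypothetical per-avatar state record shipped to clients."""
    user_id: str
    x: float
    y: float
    z: float
    heading: float

def dispatch(states, viewer, radius=200.0):
    """Serialize only the avatars within `radius` of this viewer, so a
    crowded scene ships each client a bounded amount of state."""
    near = [asdict(s) for s in states
            if (s.x - viewer.x) ** 2 + (s.y - viewer.y) ** 2 <= radius ** 2]
    return json.dumps({"type": "avatar_batch", "avatars": near})
```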
Citations: 15
Phase Estimation Based Blind Deconvolution for Turbulence Degraded Images
Pub Date : 2013-09-14 DOI: 10.1109/ICVRV.2013.53
Afeng Yang, Min Lu, Shuhua Teng, Jixiang Sun
The resolution of space-object images observed by ground-based telescopes is greatly limited by atmospheric turbulence. An improved blind deconvolution method is presented to enhance the restoration of turbulence-degraded images. First, a blind deconvolution cost function based on a mixed noise model is derived for measurements contaminated by Gaussian and Poisson noise. Then, following Fourier optics theory, the point spread function (PSF) is described by the wavefront phase aberration in the pupil plane, so the PSF estimate is generated from a wavefront phase parameterization instead of pixel-domain values. Finally, by parameterizing the object image and the PSF, the cost function is converted from a constrained into an unconstrained optimization problem. Experimental results show that the proposed method can effectively recover high-quality images from turbulence-degraded images.
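A minimal sketch of the Fourier-optics PSF model the abstract describes: the PSF is the squared magnitude of the Fourier transform of the pupil function times the phase term. The low-order polynomial phase basis below is a stand-in assumption; the paper parameterizes the pupil-plane wavefront aberration (typically with Zernike-style modes).

```python
import numpy as np

def psf_from_phase(coeffs, n=128, aperture=0.45):
    """PSF = |FFT(pupil * exp(i * phase))|^2, phase built from a basis."""
    y, x = np.mgrid[-1:1:n * 1j, -1:1:n * 1j]
    pupil = (x**2 + y**2 <= aperture**2).astype(float)  # circular aperture
    basis = [x, y, x**2 + y**2, x * y, x**2 - y**2]     # tilt/defocus/astig.
    phase = sum(c * b for c, b in zip(coeffs, basis))
    field = pupil * np.exp(1j * phase)
    psf = np.abs(np.fft.fftshift(np.fft.fft2(field))) ** 2
    return psf / psf.sum()                              # unit energy
```

Optimizing over the few basis coefficients instead of per-pixel PSF values is what lets the paper's cost function become unconstrained: physical constraints (non-negativity, band limit) are built into the model.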
Citations: 1