
Latest publications: 2014 International Conference on 3D Imaging (IC3D)

No-reference perceptual blur metric for stereoscopic images
Pub Date : 2014-12-09 DOI: 10.1109/IC3D.2014.7032601
Sid Ahmed Fezza, M. Larabi
In this paper, we propose a no-reference perceptual blur metric for 3D stereoscopic images. The proposed approach relies on computing a perceptual local blurriness map for each image of the stereo pair. To take into account the disparity/depth masking effect, we modulate the obtained perceptual score at each position of the blurriness maps according to its location in the scene. Under the assumption that, in the case of asymmetric stereoscopic image quality, 3D perception mechanisms place more emphasis on the view providing the most important and contrasted information, the two derived local blurriness maps are combined using weighting factors based on local information content. Thanks to the inclusion of these psychophysical findings, the proposed metric efficiently handles both symmetric and asymmetric distortions. Experimental results show that the proposed metric correlates better with human perception than state-of-the-art metrics.
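The abstract describes combining the two per-view blurriness maps with weights driven by local information content, so the more informative view dominates. A minimal sketch of such a weighted combination — the weighting function name and normalization are assumptions, not the authors' exact formulation:

```python
import numpy as np

def combine_blur_maps(blur_left, blur_right, info_left, info_right, eps=1e-8):
    """Combine per-view blurriness maps into one score map.

    Hypothetical sketch: each pixel's weight is the normalized local
    information content of its view, so the view carrying the most
    contrasted information dominates (asymmetric-quality assumption).
    """
    w_left = info_left / (info_left + info_right + eps)
    w_right = 1.0 - w_left
    return w_left * blur_left + w_right * blur_right
```

With equal information content this reduces to the plain average; when one view carries no information, the other view's blurriness map is returned almost unchanged.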
Citations: 2
Camera oscillation pattern for VSLAM: Translational versus rotational
Pub Date : 2014-12-09 DOI: 10.1109/IC3D.2014.7032598
M. Heshmat, M. Abdellatif, Kazuaki Nakamura, A. Abouelsoud, N. Babaguchi
Visual SLAM algorithms exploit natural scene features to infer the camera motion and build a map of the environment landmarks. A SLAM algorithm has two interrelated processes: localization and mapping. For accurate localization, we need the feature location estimates to converge quickly. On the other hand, to build an accurate map, we need accurate localization. Recently, a biologically inspired approach that exploits deliberate camera oscillation has been used to improve the convergence speed of depth estimates. In this paper, we explore the effect of the camera oscillation pattern on the accuracy of VSLAM. Two main oscillation patterns are used for distance estimation: translational and rotational. Experiments with a static and a moving robot are conducted to explore the effect of these oscillation patterns on VSLAM performance.
Citations: 0
Visual attention modeling for 3D video using neural networks
Pub Date : 2014-12-09 DOI: 10.1109/IC3D.2014.7032602
Iana Iatsun, M. Larabi, C. Fernandez-Maloigne
Visual attention is one of the most important mechanisms in human visual perception. Recently, its modeling has become a principal requirement for the optimization of image processing systems. Numerous algorithms have already been designed for 2D saliency prediction; however, only a few works address 3D content. In this study, we propose a saliency model for stereoscopic 3D video. The algorithm extracts information from three dimensions of the content: spatial, temporal and depth. The model exploits the property that interest points lie close to human fixations in order to build spatial salient features. Besides, as the perception of depth relies strongly on monocular cues, our model extracts the depth salient features using pictorial depth sources. Since weights for the fusion strategy are often selected in an ad-hoc manner, in this work we suggest using a machine learning approach. The artificial neural network defines adaptive weights based on eye-tracking data. The results of the proposed algorithm are tested against ground-truth information using state-of-the-art techniques.
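The key idea above is learning fusion weights for the spatial, temporal and depth channels from eye-tracking data rather than fixing them by hand. As a stand-in for the paper's neural-network training, a minimal least-squares sketch of learned linear fusion (all function names and the linear model are assumptions):

```python
import numpy as np

def learn_fusion_weights(spatial, temporal, depth, fixation_map):
    """Fit linear fusion weights for three saliency channels against a
    ground-truth fixation map (a simple proxy for training a neural
    network on eye-tracking data)."""
    X = np.stack([spatial.ravel(), temporal.ravel(), depth.ravel()], axis=1)
    w, *_ = np.linalg.lstsq(X, fixation_map.ravel(), rcond=None)
    return w

def fuse(spatial, temporal, depth, w):
    """Combine the three channels into a single saliency map."""
    return w[0] * spatial + w[1] * temporal + w[2] * depth
```

The paper's actual model is a neural network, which can additionally make the weights adaptive to content rather than globally linear.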
Citations: 2
Dynamic stereoscopic previz
Pub Date : 2014-12-09 DOI: 10.1109/IC3D.2014.7032600
S. Pujades, Laurent Boiron, Rémi Ronfard, Frederic Devernay
The pre-production stage in a film workflow is important for saving time during production. To be useful in stereoscopic 3-D movie-making, storyboards and previz tools need to be adapted in at least two ways. First, it should be possible to specify the desired depth values with suitable and intuitive user interfaces. Second, it should be possible to preview the stereoscopic movie at a suitable screen size. In this paper, we describe a novel technique for simulating a cinema projection room of arbitrary dimensions in a real-time game engine, while controlling the camera interaxial and convergence parameters with a gamepad controller. Our technique has been implemented in the Blender Game Engine and tested during the shooting of a short movie. Qualitative experimental results show that our technique overcomes the limitations of previous work in stereoscopic previz and can usefully complement traditional storyboards during pre-production of stereoscopic 3-D movies.
Citations: 1
Dynamic feature detection using virtual correction and camera oscillations
Pub Date : 2014-12-09 DOI: 10.1109/IC3D.2014.7032584
M. Heshmat, M. Abdellatif, Kazuaki Nakamura, A. Abouelsoud, N. Babaguchi
Visual SLAM algorithms exploit natural scene features to infer the camera motion and build a map of a static environment. In this paper, we relax the severe assumption of a static scene to allow for the detection and deletion of dynamic points. A new "virtual correction" method is introduced, which detects dynamic points by checking the re-projection error of each point before and after a virtual measurement update. It can also recover erroneously excluded useful features, particularly distant points that may otherwise be deleted because their position estimates change after a new measurement observation. Deliberate camera oscillations are also used to improve the VSLAM accuracy and the camera observability. The simulation results show the effectiveness of virtual correction combined with camera oscillation in recovering misclassified features and detecting dynamic features, even in difficult scenarios.
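The decision logic described above — flag a point as dynamic if its re-projection error stays large even after the virtual measurement update, but keep it if the update explains the error away — can be sketched as a simple rule. The thresholds and labels here are illustrative assumptions, not the paper's tuned values:

```python
def classify_point(err_before, err_after, tau=2.0, improve_ratio=0.5):
    """Hypothetical classification in the spirit of 'virtual correction'.

    err_before / err_after: re-projection error (pixels) before and
    after the virtual measurement update.
    - small residual error          -> static point, keep as-is
    - large drop in error           -> recovered point (e.g. a distant
                                       landmark whose estimate moved)
    - error remains large           -> dynamic point, delete from map
    """
    if err_after <= tau:
        return "static"
    if err_after <= improve_ratio * err_before:
        return "recovered"
    return "dynamic"
```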
Citations: 0
A subjective evaluation of true 3D images
Pub Date : 2014-12-01 DOI: 10.1109/IC3D.2014.7032603
R. R. Tamboli, K. Vupparaboina, Jayanth Reddy Regatti, S. Jana, Sumohana S. Channappayya
We present the results of the first-ever subjective evaluation of true 3D images performed on a light field display. Given the ever-increasing volume of true 3D image content being created and consumed, it is imperative to construct a systematic framework for the subjective evaluation of such content. We first describe our experimental setup and propose a methodology for subjective evaluation using this setup. We then describe the dataset used for our study. Subjective evaluation results are reported for 20 subjects. In addition to subjective results, we also report results of popular full-reference objective 2D image quality assessment methods applied on a per-view basis.
Citations: 10
3D models over the centuries: From old floor plans to 3D representation
Pub Date : 2014-12-01 DOI: 10.1109/IC3D.2014.7032583
C. Riedinger, M. Jordan, Hedi Tabia
This paper presents a set of algorithms dedicated to the 3D modeling of historical buildings from a collection of old architecture plans, including floor plans, elevations and cutoffs. Image processing algorithms help to detect and localize the main structures of the building from the floor plans (thick and thin walls, openings). The extrusion of the walls allows us to build a first 3D model. We compute height information and add textures to the model by analyzing the elevation images from the same collection of documents. We applied this pipeline to XVIIIth century plans of the Château de Versailles, and show results for two different parts of the Château.
Citations: 5
Turning a ToF camera into an illumination tester: Multichannel waveform recovery from few measurements using compressed sensing
Pub Date : 2014-12-01 DOI: 10.1109/IC3D.2014.7032582
Miguel Heredia Conde, K. Hartmann, O. Loffeld
A critical element of any Time-of-Flight (ToF) 3D imaging system is the illumination. Most commercial solutions are restricted to short-range indoor operation and use simple illumination setups of one or a few LEDs grouped together. Recent developments towards medium- and long-range ToF imaging, ready for outdoor operation, create the need for powerful illumination setups consisting of many emitters, possibly grouped in distributed modules. Since the depth accuracy of ToF cameras strongly depends on the quality of the illumination waveform, ensuring that a complex illumination system provides a homogeneous in-phase wavefront is critically important for minimizing systematic inaccuracies. In this work we present a novel framework for multichannel simultaneous testing of illumination waveforms, which is able to recover the waveform of the incident light on each pixel of a ToF camera, exploiting the sparsity of typical continuous wave (CW) illumination signals in the frequency domain.
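The abstract's premise is standard compressed sensing: a signal that is sparse in some basis (here, CW waveforms in the frequency domain) can be recovered from few measurements. The paper's specific recovery algorithm is not stated; as a generic illustration only, a minimal orthogonal matching pursuit (OMP) sketch for recovering a k-sparse coefficient vector:

```python
import numpy as np

def omp(A, y, k):
    """Orthogonal Matching Pursuit: greedily recover a k-sparse x from
    measurements y = A @ x. Each iteration picks the column of A most
    correlated with the residual, then re-fits on the chosen support."""
    residual = y.astype(float).copy()
    support = []
    x = np.zeros(A.shape[1])
    coef = np.zeros(0)
    for _ in range(k):
        j = int(np.argmax(np.abs(A.T @ residual)))
        if j not in support:
            support.append(j)
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coef
    x[support] = coef
    return x
```

In the paper's setting, A would model the pixel's sampling of the incident light and the sparse coefficients would live in a Fourier-like dictionary; the sketch above makes no claim about that sensing model.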
Citations: 0
Iterative refinement for real-time local stereo matching
Pub Date : 2014-12-01 DOI: 10.1109/IC3D.2014.7032581
Maarten Dumont, Patrik Goorts, S. Maesen, Donald Degraen, P. Bekaert, G. Lafruit
We present a novel iterative refinement process applicable to any stereo matching algorithm. The quality of its disparity map output is increased using four rigorously defined refinement modules, which can be iterated multiple times: a disparity cross check, bitwise fast voting, invalid disparity handling, and median filtering. We apply our refinement process to our recently developed aggregation window method for stereo matching, which combines two adaptive windows per pixel region [2]: one following the horizontal edges in the image, the other the vertical edges. Their combination defines the final aggregation window shape that closely follows all object edges and thereby achieves increased hypothesis confidence. We demonstrate that the iterative disparity refinement has a large effect on the overall quality, especially around occluded areas, and tends to converge to a final solution. We perform a quantitative evaluation on various Middlebury datasets. Our whole disparity estimation process supports efficient GPU implementation to facilitate scalability and real-time performance.
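Of the four modules listed, the disparity cross check is the most self-contained: a left-view disparity is kept only if the corresponding right-view pixel agrees. A minimal sketch of that check (threshold and NaN-marking convention are assumptions; the paper's other modules would then fill the invalidated pixels):

```python
import numpy as np

def cross_check(disp_left, disp_right, tau=1.0):
    """Left-right consistency check. A pixel (y, x) with left disparity d
    maps to (y, x - d) in the right view; if the right disparity there
    differs by more than tau, the left disparity is marked invalid (NaN)."""
    h, w = disp_left.shape
    out = disp_left.astype(float).copy()
    for y in range(h):
        for x in range(w):
            d = int(round(disp_left[y, x]))
            xr = x - d
            if not (0 <= xr < w and abs(disp_left[y, x] - disp_right[y, xr]) <= tau):
                out[y, x] = np.nan
    return out
```

A production version would vectorize the double loop; the scalar form is kept here for clarity.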
Citations: 1
Row-interleaved sampling for stereoscopic video coding targeting polarized displays
Pub Date : 2014-12-01 DOI: 10.1109/IC3D.2014.7032580
P. Aflaki, Maryam Homayouni, M. Hannuksela, M. Gabbouj
In this paper, a coding scheme targeting stereoscopic content for polarized displays is introduced. We propose using row-interleaved sampling of the views. Asymmetry is achieved by selecting odd/even rows for the two views, according to the format in which they will be shown on a polarized display. The coding performance of several different multiview coding schemes with inter-view prediction was analyzed and compared with an anchor case in which no downsampling is applied to the input content. The objective results show that the proposed row-interleaved sampling scheme outperforms all other schemes.
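Row-interleaved sampling as described above keeps, for each view, only the rows that view will occupy on the polarized display (even rows for one, odd rows for the other), halving the vertical resolution before coding. A minimal sketch under that reading — function name and row assignment are assumptions:

```python
import numpy as np

def row_interleaved_sample(left, right, left_takes_even=True):
    """Downsample a stereo pair for row-interleaved (polarized) display:
    one view keeps even rows, the other keeps odd rows, so each coded
    view has half the vertical resolution."""
    if left_takes_even:
        return left[0::2], right[1::2]
    return left[1::2], right[0::2]
```

On the display side, the decoded half-height views would be re-interleaved row by row, matching the screen's polarization pattern.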
Citations: 1