
Latest publications — 2014 International Conference on 3D Imaging (IC3D)

Row-interleaved sampling for stereoscopic video coding targeting polarized displays
Pub Date : 2014-12-01 DOI: 10.1109/IC3D.2014.7032580
P. Aflaki, Maryam Homayouni, M. Hannuksela, M. Gabbouj
In this paper, a coding scheme targeting stereoscopic content for polarized displays is introduced. Row-interleaved sampling of the views is proposed: asymmetry is achieved by selecting odd or even rows for each view according to the format in which it will be shown on a polarized display. The coding performance of several multiview coding schemes with inter-view prediction was analyzed and compared against an anchor case with no downsampling applied to the input content. The objective results show that the proposed row-interleaved sampling scheme outperforms all other schemes.
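As a rough illustration of the sampling step described above, the following NumPy sketch keeps complementary row parities of the two views before encoding; the even/odd assignment and the function name are illustrative assumptions, not the paper's implementation:

```python
import numpy as np

def row_interleave_sample(left, right):
    """Downsample a stereo pair for a row-interleaved polarized display.

    Keeps the even rows of the left view and the odd rows of the right
    view (the parity assignment is an illustrative assumption), so each
    view is halved vertically and the pair together fills the display.
    """
    return left[0::2, ...], right[1::2, ...]

# Toy 4x4 "views": after sampling, each view keeps 2 of its 4 rows.
left = np.arange(16).reshape(4, 4)
right = np.arange(16).reshape(4, 4) + 100
l_half, r_half = row_interleave_sample(left, right)
print(l_half.shape, r_half.shape)  # (2, 4) (2, 4)
```

On the display side, the decoded half-resolution views would simply be re-interleaved row by row, which is why this sampling matches the polarized output format.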
Citations: 1
No-reference quality assessment of 3D videos based on human visual perception
Pub Date : 2014-12-01 DOI: 10.1109/IC3D.2014.7032585
M. Hasan, J. Arnold, M. Frater
Broadcasting of high-definition stereoscopic 3D video is growing rapidly because of greater demand in the mass consumer market. In spite of increasing consumer interest, poor quality, crosstalk and other side effects, and visual quality degradation due to packet loss during transmission have hampered the advancement of 3D visualization. Quality assessment of distorted 3D video is a crucial element in designing and deploying advanced immersive media distribution platforms. A widely accepted no-reference quality metric for 3D video that accounts for the human visual system (HVS) is yet to be developed. In this paper we propose a quality assessment (QA) criterion that can be measured without the original video. First, we propose a disparity index measured by region-based similarity matching; the edge magnitude difference is then detected for visually significant areas of the image. Finally, an assessment metric focused on human perception is generated to measure 3D videos. Experimental analysis on common video datasets and comparison with different algorithms show the efficiency of the proposed algorithm for 3D stereoscopic videos in terms of perceptual characteristics.
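The edge-magnitude-difference term can be sketched as below. This is a minimal, assumption-laden illustration using plain 3x3 Sobel kernels and a global mean; the paper restricts the comparison to visually significant areas and combines it with the disparity index, neither of which is reproduced here:

```python
import numpy as np

def sobel_magnitude(img):
    """Edge magnitude via 3x3 Sobel filters (valid region only)."""
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], float)
    ky = kx.T
    h, w = img.shape
    gx = np.zeros((h - 2, w - 2))
    gy = np.zeros((h - 2, w - 2))
    for i in range(h - 2):
        for j in range(w - 2):
            patch = img[i:i + 3, j:j + 3]
            gx[i, j] = (patch * kx).sum()
            gy[i, j] = (patch * ky).sum()
    return np.hypot(gx, gy)

def edge_magnitude_difference(left, right):
    """Mean absolute edge-magnitude difference between two views."""
    return np.abs(sobel_magnitude(left) - sobel_magnitude(right)).mean()

# Identical views give a zero difference; a displaced edge gives > 0.
left = np.zeros((8, 8)); left[:, 4:] = 1.0
print(edge_magnitude_difference(left, left))  # 0.0
```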
Citations: 8
Texture analysis using 3D Gabor features and 3D MPEG-7 Edge Histogram descriptor in fluorescence microscopy
Pub Date : 2014-12-01 DOI: 10.1109/IC3D.2014.7032576
Tomás Majtner, D. Svoboda
Pattern recognition with a focus on texture and shape analysis is still a very active topic, especially in biomedical image processing. In this article, we introduce 3D extensions of well-known approaches for this particular area. We focus on the collection of MPEG-7 image descriptors, specifically the Edge Histogram Descriptor (EHD) and the Gabor features that form the core of the Homogeneous Texture Descriptor (HTD). The proposed extensions are evaluated on a dataset consisting of three classes of 3D volumetric biomedical images. Two different classifiers, k-NN and multi-class SVM, are used to evaluate the proposed algorithms. According to the presented tests, the proposed 3D extensions clearly outperform their 2D equivalents in the classification tasks.
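A 3D Gabor kernel of the kind being extended here can be sketched as a Gaussian envelope modulated by a cosine carrier along a chosen direction; the size, sigma, frequency, and normalization below are illustrative choices, not the parameters used in the paper:

```python
import numpy as np

def gabor_3d(size, sigma, freq, direction):
    """Real part of a 3D Gabor kernel: an isotropic Gaussian envelope
    times a cosine carrier along a unit direction vector."""
    half = size // 2
    z, y, x = np.mgrid[-half:half + 1, -half:half + 1, -half:half + 1]
    d = np.asarray(direction, float)
    d /= np.linalg.norm(d)
    phase = 2 * np.pi * freq * (x * d[0] + y * d[1] + z * d[2])
    envelope = np.exp(-(x**2 + y**2 + z**2) / (2 * sigma**2))
    return envelope * np.cos(phase)

kernel = gabor_3d(size=9, sigma=2.0, freq=0.25, direction=(1, 0, 0))
# A texture response is then the magnitude of convolving the image
# volume with a bank of such kernels over several directions/frequencies.
```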
Citations: 1
Automatic analysis of sharpness mismatch between stereoscopic views for stereo 3D videos
Pub Date : 2014-12-01 DOI: 10.1109/IC3D.2014.7032572
Mohan Liu, K. Müller
This paper presents an efficient approach to measuring sharpness mismatch between stereoscopic views. Sharpness mismatch can occur through focus mismatch between stereoscopic cameras, errors in post-processing, or low-bandwidth transmission in which one view is subsampled or transmitted at a much lower rate. This artifact can lead to a degraded 3D experience for observers. In this paper, the sharpness mismatch score is estimated by measuring the width deviations of edge pairs in each valid depth plane. The mismatch probability is then calculated from the perceptibility of these edge width deviations. In the experiments, Gaussian low-pass filters were used to generate global sharpness mismatch between stereoscopic views, since the defocus effects of lens aberrations can be modeled as Gaussian blur; the global sharpness distortions thus simulate the focus mismatch of stereo cameras. The disparity maps of the test videos were automatically generated and corrected, and the original high-quality disparity maps of the test datasets were used as benchmarks. The experimental results show that the proposed approach performs well at measuring sharpness mismatch between stereoscopic views in comparison with state-of-the-art metrics.
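The experimental setup described above — emulating focus mismatch by Gaussian-blurring one view of the pair — can be sketched with a separable NumPy convolution. The kernel-radius rule and edge padding are illustrative choices, not the paper's exact filter:

```python
import numpy as np

def gaussian_kernel_1d(sigma, radius=None):
    """Normalized 1D Gaussian; radius defaults to ~3 sigma."""
    radius = radius or int(3 * sigma)
    x = np.arange(-radius, radius + 1)
    k = np.exp(-x**2 / (2 * sigma**2))
    return k / k.sum()

def blur_view(img, sigma):
    """Separable Gaussian blur (edge-padded), emulating a defocused
    view so the pair exhibits a global sharpness mismatch."""
    k = gaussian_kernel_1d(sigma)
    r = len(k) // 2
    padded = np.pad(img, r, mode='edge')
    rows = np.apply_along_axis(lambda v: np.convolve(v, k, 'valid'), 1, padded)
    return np.apply_along_axis(lambda v: np.convolve(v, k, 'valid'), 0, rows)

# Blurring a sharp step softens its edge: the maximum horizontal
# gradient drops below the original step height of 1.0.
step = np.zeros((10, 10)); step[:, 5:] = 1.0
blurred = blur_view(step, sigma=1.0)
```

Feeding one pristine view and one `blur_view` output into a sharpness-mismatch metric gives a controlled test case with a known global distortion.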
Citations: 4
Evaluation of pairwise calibration techniques for range cameras and their ability to detect a misalignment
Pub Date : 2014-12-01 DOI: 10.1109/IC3D.2014.7032596
A. Lejeune, David Grogna, Marc Van Droogenbroeck, J. Verly
Many applications require the use of multiple cameras to cover a large volume. In this paper, we evaluate several pairwise calibration techniques dedicated to multiple range cameras. We compare the precision of a self-calibration technique, based on movement in front of the cameras, to object-based calibration. While the self-calibration technique is less precise than its counterparts, it yields a first estimate of the transformation between the cameras and makes it possible to detect when the cameras become misaligned. This technique is therefore useful in practical situations.
Citations: 0
Computation of microimages for plenoptic display
Pub Date : 2014-12-01 DOI: 10.1109/IC3D.2014.7032579
A. Dorado, G. Saavedra, Seokmin Hong, M. Martínez-Corral
We report a new algorithm for generating microimages ready for projection onto an integral-imaging monitor. The algorithm is based on the transformation properties of the plenoptic field captured with an array of digital cameras. We show that a small number of cameras can produce the microimages needed to display 3D scenes with resolution and parallax fully adapted to the monitor's features.
Citations: 0
A physically motivated pixel-based model for background subtraction in 3D images
Pub Date : 2014-12-01 DOI: 10.1109/IC3D.2014.7032591
Marc Braham, A. Lejeune, Marc Van Droogenbroeck
This paper proposes a new pixel-based background subtraction technique, applicable to range images, to detect motion. Our method exploits the physical meaning of depth information, which leads to an improved background/foreground segmentation and the instantaneous suppression of the ghosts that would appear in color images. In particular, our technique considers certain characteristics of depth measurements, such as failures for certain pixels or the non-uniform spatial distribution of noise in range images, to build an improved pixel-based background model. Experiments show that incorporating these specificities of depth measurements yields a method whose performance improves on other state-of-the-art methods.
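A minimal sketch of a depth-aware, pixel-based model in the spirit described above. It assumes invalid pixels read as 0 and that a per-pixel noise estimate is available; the 3-sigma threshold and all names are illustrative, not the paper's actual model:

```python
import numpy as np

NO_DEPTH = 0.0  # sensor failure code (assumption: invalid pixels read 0)

def segment_foreground(depth, background, noise_sigma):
    """Pixel-wise foreground mask for a range image.

    A pixel is foreground when it returns a valid depth measurably
    *closer* than the background model; `noise_sigma` can be an array,
    reflecting the non-uniform spatial noise of range sensors. Using
    depth's physical meaning (closer = in front) avoids the ghosts a
    color-based model leaves after an object moves away.
    """
    valid = depth != NO_DEPTH
    closer = (background - depth) > 3.0 * noise_sigma
    return valid & closer

# Background plane at 4 m; a 2x2 object at 2 m; one failed pixel.
bg = np.full((4, 4), 4.0)
frame = bg.copy()
frame[1:3, 1:3] = 2.0   # object in front of the background
frame[0, 0] = NO_DEPTH  # measurement failure, must not trigger
mask = segment_foreground(frame, bg, noise_sigma=0.05)
```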
Citations: 10
Depth estimation for hand-held light field cameras under low light conditions
Pub Date : 2014-12-01 DOI: 10.1109/IC3D.2014.7032578
Min-Hung Chen, Ching-Fan Chiang, Yi-Chang Lu
Depth estimation is one of the new functions provided by hand-held light field cameras. However, the quality of depth estimation is very sensitive to noise, which is especially problematic for scenes captured under low light. In this paper, we propose a depth estimation flow for light field data that is fully automated and requires no a priori noise characteristics. Results in terms of Root Mean Square Error (RMSE) and Percentage of Bad Matching Pixels (PBM) show the effectiveness of this iterative correlation-based depth estimation flow even with basic filtering functions.
Citations: 0
Revised depth map estimation for multi-view stereo
Pub Date : 2014-12-01 DOI: 10.1109/IC3D.2014.7032588
Yao Yao, Hao Zhu, Yongming Nie, X. Ji, Xun Cao
Optical flow estimation is one of the popular methods for obtaining depth maps in multi-view stereo, owing to its high accuracy and robustness. In traditional optical flow estimation, the energy function encodes three assumptions: intensity constancy, gradient constancy, and global smoothness. In this work, we propose a local smoothness assumption to constrain the optical flow disparity between neighboring pixels. We first study the new smoothness term and its corresponding energy function, and present a practical iterative approach to minimize that energy. We then apply this estimation method to a multi-view stereo system and obtain depth maps for different image pairs. Our results demonstrate that, compared to traditional methods, the algorithm performs well at recovering smooth surfaces.
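The three classical assumptions mentioned in the abstract are commonly written as a single variational energy over the flow field w = (u, v), as in Brox-style optical flow. This is the standard formulation given for reference, not the paper's exact functional, which adds a local smoothness term on neighboring disparities:

```latex
E(u,v) = \int_\Omega \Psi\!\left(\lvert I(\mathbf{x}+\mathbf{w}) - I(\mathbf{x})\rvert^2\right) d\mathbf{x}
       \;+\; \gamma \int_\Omega \Psi\!\left(\lvert \nabla I(\mathbf{x}+\mathbf{w}) - \nabla I(\mathbf{x})\rvert^2\right) d\mathbf{x}
       \;+\; \alpha \int_\Omega \Psi\!\left(\lvert \nabla u\rvert^2 + \lvert \nabla v\rvert^2\right) d\mathbf{x}
```

The first term enforces intensity constancy, the second gradient constancy, and the third global smoothness; a common robust penalizer is \(\Psi(s^2) = \sqrt{s^2 + \varepsilon^2}\), which keeps the energy differentiable while limiting the influence of outliers.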
Citations: 2
Cost-efficient hardware implementation of stereo image depth optimization system
Pub Date : 2014-12-01 DOI: 10.1109/IC3D.2014.7032589
Chun-Chang Yu, Chia-Hao Cheng, Pei-Chun Lin, C. C. Chen
This paper focuses on the visual fatigue issue in viewing 3D content, which is caused by the distance between the screen and the fused images. A stereo image depth optimization system comprising disparity map calculation, viewpoint optimization, and stereo image synthesis is proposed to solve the issue, as follows: first, the disparity map calculation adopts a modified binary window block matching algorithm, so that the complex, iterative computations can be accelerated by hardware implementation strategies including parallel color difference calculation, parallel memory banks, window shifting, and a pipelined architecture; second, viewpoint optimization moves disparities into the zone of comfort; third, stereo images are synthesized through Depth-Image-Based Rendering (DIBR); finally, the system is realized on an FPGA board and video files are shown via the HDMI interface. This hardware implementation turns out to be more cost-efficient at achieving high-speed performance than previous works.
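The block-matching step can be illustrated with a plain sum-of-absolute-differences (SAD), winner-take-all matcher. This is a software stand-in for the paper's modified binary window algorithm and its hardware pipeline; the window and disparity range are illustrative:

```python
import numpy as np

def sad_disparity(left, right, max_disp, win=3):
    """Winner-take-all disparity map via SAD block matching.

    For each candidate disparity d, the per-pixel cost of matching
    left column x against right column x-d is summed over a win x win
    window; each pixel keeps the d with the lowest windowed cost.
    """
    h, w = left.shape
    r = win // 2
    disp = np.zeros((h, w), dtype=np.int64)
    best = np.full((h, w), np.inf)
    for d in range(max_disp + 1):
        diff = np.full((h, w), np.inf)       # columns j < d have no match
        diff[:, d:] = np.abs(left[:, d:] - right[:, :w - d])
        cost = np.zeros_like(diff)
        for i in range(h):
            for j in range(w):
                i0, i1 = max(0, i - r), min(h, i + r + 1)
                j0, j1 = max(0, j - r), min(w, j + r + 1)
                cost[i, j] = diff[i0:i1, j0:j1].sum()
        better = cost < best
        disp[better] = d
        best[better] = cost[better]
    return disp

# A right view that is the left view shifted by two columns should be
# recovered as a uniform disparity of 2 away from the left border.
rng = np.random.default_rng(0)
left = rng.random((8, 12))
right = np.roll(left, -2, axis=1)
disp = sad_disparity(left, right, max_disp=4)
```

The hardware strategies listed in the abstract (parallel memory banks, window shifting, pipelining) essentially parallelize the two inner loops of this reference computation.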
Citations: 0