
Latest Literature: 2018 - 3DTV-Conference: The True Vision - Capture, Transmission and Display of 3D Video (3DTV-CON)

DEPTH ESTIMATION IN LIGHT FIELD CAMERA ARRAYS BASED ON MULTI-STEREO MATCHING AND BELIEF PROPAGATION
Ségolène Rogge, A. Munteanu
Despite the rich variety of depth estimation methods in the literature, computing accurate depth in multi-view camera systems remains a difficult computer vision problem. This paper proposes a novel depth estimation method for light field camera arrays. The work goes beyond existing depth estimation methods for light field cameras, being the first to employ an array of such cameras. The proposed method makes use of a multi-window, multi-scale stereo matching algorithm combined with global energy minimization based on belief propagation. The stereo-pair results are merged using k-means clustering. The experiments demonstrate systematically improved depth estimation performance compared to the use of a single light field camera. Additionally, the quality of the depth estimates is quasi-constant at any location between the cameras, which holds great promise for the development of free-navigation applications in the near future.
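The abstract does not spell out how the k-means merging step works; a minimal sketch of one plausible reading (all names and parameters below are our own, not the authors') clusters the per-pixel depth candidates produced by the individual stereo pairs and keeps the centroid of the best-supported cluster:

```python
import numpy as np

def merge_depth_candidates(depth_stack, k=2, iters=10):
    """Merge per-pixel depth candidates from several stereo pairs.

    depth_stack: (N, H, W) array, one depth map per stereo pair.
    For each pixel, run a small 1D k-means over its N candidates and
    return the centroid of the most populated cluster (illustrative
    reading of the paper's k-means merging, not the authors' code).
    """
    N, H, W = depth_stack.shape
    merged = np.empty((H, W), dtype=np.float32)
    for y in range(H):
        for x in range(W):
            d = depth_stack[:, y, x]
            # initialize centroids across the spread of the candidates
            c = np.linspace(d.min(), d.max(), k)
            for _ in range(iters):
                labels = np.argmin(np.abs(d[:, None] - c[None, :]), axis=1)
                for j in range(k):
                    if np.any(labels == j):
                        c[j] = d[labels == j].mean()
            # keep the centroid supported by the most stereo pairs
            counts = np.bincount(labels, minlength=k)
            merged[y, x] = c[np.argmax(counts)]
    return merged

# toy usage: 4 stereo pairs, 3 of which roughly agree on depth ~1.0
stack = np.stack([np.full((2, 2), v) for v in (1.0, 1.1, 0.9, 5.0)])
print(merge_depth_candidates(stack))  # ~1.0 everywhere
```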
Citations: 3
LOCAL METHOD OF COLOR-DIFFERENCE CORRECTION BETWEEN STEREOSCOPIC-VIDEO VIEWS
S. Lavrushkin, Vitaliy Lyudvichenko, D. Vatolin
Many factors can cause color distortions between stereoscopic views during 3D-video shooting. Numerous viewers experience discomfort and headaches when watching stereoscopic videos that contain such distortions. In addition, 3D videos with color differences are hard to process because many algorithms assume brightness constancy. We propose an automatic method for correcting color distortions between stereoscopic views and compare it with analogous methods. The comparison shows that our proposed method combines high color-correction accuracy with relatively low computational complexity.
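The paper's method is local; as a simplified global stand-in for color-difference correction (our own illustration, not the authors' algorithm), per-channel histogram matching maps one view's color distribution onto the other's:

```python
import numpy as np

def match_histogram(source, reference):
    """Map the intensity distribution of `source` onto `reference`
    (one uint8 color channel). A global stand-in for the local
    correction described in the paper."""
    s_vals, s_counts = np.unique(source.ravel(), return_counts=True)
    r_vals, r_counts = np.unique(reference.ravel(), return_counts=True)
    s_cdf = np.cumsum(s_counts) / source.size
    r_cdf = np.cumsum(r_counts) / reference.size
    # for each source quantile, find the reference value at the same quantile
    mapped = np.interp(s_cdf, r_cdf, r_vals)
    lut = np.interp(np.arange(256), s_vals, mapped)
    return lut[source].astype(np.uint8)

def correct_view(distorted, reference):
    """Per-channel histogram matching for (H, W, 3) uint8 views."""
    return np.dstack([match_histogram(distorted[..., c], reference[..., c])
                      for c in range(3)])
```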
Citations: 2
LATEST RESEARCH AT THE ADVANCED DISPLAYS LABORATORY AT NTU
P. Surman, X. Zhang, Weitao Song, Xinxing Xia, Shizheng Wang, Yuanjin Zheng
There are many basic ways of providing a glasses-free 3D display; the three methods considered most likely to succeed commercially were chosen for our current research: multi-layer light field, head-tracked, and super-multiview displays. Our multi-layer light field display enables a far smaller form factor than other types, and faster algorithms, together with horizontal-parallax-only operation, will considerably speed up computation. A spin-off of this technology is a near-eye display that provides focus cues to maximize user comfort. Head-tracked displays use liquid crystal display panels illuminated by a directional backlight to produce multiple sets of exit-pupil pairs that follow the user's eyes under the control of a head-position tracker. Our super-multiview (SMV) display system uses high-frame-rate projectors for spatio-temporal multiplexing, giving dense viewing zones with no accommodation/convergence (A/C) conflict. Bandwidth reduction is achieved by discarding redundant information at capture. The status of the latest prototypes and their performance is described, and we conclude by indicating the future directions of our research.
Citations: 0
3D OBJECTIVE QUALITY ASSESSMENT OF LIGHT FIELD VIDEO FRAMES
R. R. Tamboli, P. A. Kara, A. Cserkaszky, A. Barsi, M. Martini, Balasubramanyam Appina, Sumohana S. Channappayya, S. Jana
With the rapid advances in light field displays and cameras, research in light field content creation, visualization, coding and quality assessment is now beyond a state of emergence; it has already emerged and started attracting a significant part of the scientific community. The capability of light field displays to offer a glasses-free 3D experience simultaneously to multiple users has opened new avenues in the subjective and objective quality assessment of light field image content, and video is also becoming a research target of such quality evaluation methods. Yet it needs to be stated that while static light field content has evidently received relatively more attention, research on light field video content remains largely unexplored. In this paper, we present results of the objective quality assessment of key frames extracted from light field video content. To this end, we use our own full-reference 3D objective quality metric.
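The authors' full-reference 3D metric is their own; as a hedged illustration of the overall pipeline (key-frame extraction followed by full-reference scoring), the sketch below substitutes plain PSNR for their metric and uses an illustrative frame-difference rule of our own devising:

```python
import numpy as np

def psnr(ref, test, peak=255.0):
    """Full-reference PSNR; a simple stand-in for the authors'
    full-reference 3D quality metric, which is not reproduced here."""
    mse = np.mean((ref.astype(np.float64) - test.astype(np.float64)) ** 2)
    return float('inf') if mse == 0 else 10 * np.log10(peak ** 2 / mse)

def key_frame_indices(frames, thresh=10.0):
    """Pick frames whose mean absolute difference from the previously
    kept frame exceeds `thresh` (an illustrative key-frame rule only)."""
    keep = [0]
    for i in range(1, len(frames)):
        diff = np.mean(np.abs(frames[i].astype(np.float64)
                              - frames[keep[-1]].astype(np.float64)))
        if diff > thresh:
            keep.append(i)
    return keep

# usage: score the selected key frames of a distorted sequence
# scores = [psnr(ref[i], dist[i]) for i in key_frame_indices(dist)]
```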
Citations: 4
SINGLE-SHOT DENSE RECONSTRUCTION WITH EPIC-FLOW
Qiao Chen, Charalambos (Charis) Poullis
In this paper we present a novel method for generating dense reconstructions by applying only structure-from-motion (SfM) to large-scale datasets, without the need for multi-view stereo as a post-processing step. A state-of-the-art optical flow technique is used to generate dense matches. The matches are encoded such that verification of their correctness becomes possible, and they are stored in an on-disk database. This out-of-core approach transfers the requirement for large memory space to disk, thereby allowing the processing of even larger-scale datasets than before. We compare our approach with the state of the art and present results which verify our claims.
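A sketch of the out-of-core idea under stated assumptions: OpenCV's Farneback flow stands in for the EpicFlow technique named in the title, the forward-backward round-trip test is one plausible reading of "verification of correctness", and the SQLite schema is our own choice of on-disk database:

```python
import sqlite3
import numpy as np
import cv2

def dense_verified_matches(img1, img2, db_path="matches.db", max_err=1.0):
    """Dense matches via optical flow with a forward-backward consistency
    check, stored out-of-core in SQLite. img1/img2 are BGR images."""
    g1 = cv2.cvtColor(img1, cv2.COLOR_BGR2GRAY)
    g2 = cv2.cvtColor(img2, cv2.COLOR_BGR2GRAY)
    fwd = cv2.calcOpticalFlowFarneback(g1, g2, None, 0.5, 3, 15, 3, 5, 1.2, 0)
    bwd = cv2.calcOpticalFlowFarneback(g2, g1, None, 0.5, 3, 15, 3, 5, 1.2, 0)

    h, w = g1.shape
    ys, xs = np.mgrid[0:h, 0:w]
    x2 = xs + fwd[..., 0]
    y2 = ys + fwd[..., 1]
    # sample the backward flow at the forward-mapped location (nearest pixel)
    xi = np.clip(np.round(x2).astype(int), 0, w - 1)
    yi = np.clip(np.round(y2).astype(int), 0, h - 1)
    err = np.hypot(fwd[..., 0] + bwd[yi, xi, 0], fwd[..., 1] + bwd[yi, xi, 1])
    ok = err < max_err  # a match is "verified" if the flow round-trips

    con = sqlite3.connect(db_path)
    con.execute("CREATE TABLE IF NOT EXISTS matches "
                "(x1 INT, y1 INT, x2 REAL, y2 REAL)")
    rows = zip(xs[ok].tolist(), ys[ok].tolist(),
               x2[ok].tolist(), y2[ok].tolist())
    con.executemany("INSERT INTO matches VALUES (?, ?, ?, ?)", rows)
    con.commit()
    con.close()
    return int(ok.sum())
```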
Citations: 0
SIMULATION OF PLENOPTIC CAMERAS
Tim Michels, Arne Petersen, L. Palmieri, R. Koch
Plenoptic cameras enable the capture of spatial as well as angular color information, which can be used for various applications, among them image refocusing and depth calculation. However, these cameras are expensive, and research in this area currently lacks data for ground-truth comparisons. In this work we describe a flexible, easy-to-use Blender model for the different plenoptic camera types which, on the one hand, is able to provide ground-truth data for research and, on the other hand, allows an inexpensive assessment of a camera's usefulness for the desired applications. Furthermore, we show that the rendering results exhibit the same image-degradation effects as real cameras, and we make our simulation publicly available.
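The authors' Blender model is published separately; the fragment below is only a minimal sketch of the core idea (a grid of refractive lenslets in front of a camera), with illustrative dimensions and material settings of our own. It runs inside Blender's bundled Python, where the bpy module is available:

```python
# A minimal sketch, not the authors' model: a lenslet grid plus a camera.
import bpy

NX, NY, PITCH, R = 8, 8, 0.02, 0.012  # lenslet count, spacing, radius (all illustrative)

# simple glass-like material (Cycles); input names as in Blender 3.x
mat = bpy.data.materials.new("Lenslet")
mat.use_nodes = True
bsdf = mat.node_tree.nodes["Principled BSDF"]
bsdf.inputs["IOR"].default_value = 1.5
bsdf.inputs["Transmission"].default_value = 1.0  # "Transmission Weight" in Blender 4.x

# lay out the microlens array as a grid of small spheres in the z = 0 plane
for i in range(NX):
    for j in range(NY):
        x = (i - (NX - 1) / 2) * PITCH
        y = (j - (NY - 1) / 2) * PITCH
        bpy.ops.mesh.primitive_uv_sphere_add(radius=R, location=(x, y, 0.0))
        bpy.context.object.data.materials.append(mat)

# camera behind the lenslet plane, looking through it along +Z
cam_data = bpy.data.cameras.new("PlenopticCam")
cam_obj = bpy.data.objects.new("PlenopticCam", cam_data)
cam_obj.location = (0.0, 0.0, -0.1)
cam_obj.rotation_euler = (3.14159, 0.0, 0.0)
bpy.context.collection.objects.link(cam_obj)
```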
Citations: 5
MATCHING LIGHT FIELD DATASETS FROM PLENOPTIC CAMERAS 1.0 AND 2.0
Waqas Ahmad, L. Palmieri, R. Koch, Mårten Sjöström
The capture of angular and spatial information of a scene using a single camera is made possible by a newly emerging technology referred to as the plenoptic camera. Both angular and spatial information enable various post-processing applications, e.g. refocusing, synthetic aperture, super-resolution, and 3D scene reconstruction. In the past, multiple traditional cameras were used to capture the angular and spatial information of the scene. Recently, however, with advances in optical technology, plenoptic cameras have been introduced to capture scene information. In a plenoptic camera, a lenslet array is placed between the main lens and the image sensor, allowing the multiplexing of spatial and angular information onto a single image, also referred to as a plenoptic image. The placement of the lenslet array relative to the main lens and the image sensor results in two different optical designs, referred to as plenoptic 1.0 and plenoptic 2.0. In this work, we present a novel dataset captured with plenoptic 1.0 (Lytro Illum) and plenoptic 2.0 (Raytrix R29) cameras for the same scenes under the same conditions. The dataset provides benchmark content for various research and development activities on plenoptic images.
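For orientation, the geometric distinction between the two designs can be summarized with the thin-lens equation; this is standard plenoptic-camera theory rather than material from the paper. With $a$ the distance from the main-lens image plane to the microlens array (MLA), $b$ the distance from the MLA to the sensor, and $f_{\mu}$ the microlens focal length:

```latex
\[
\text{Plenoptic 1.0 (unfocused):}\quad d_{\mathrm{MLA}\to\mathrm{sensor}} = f_{\mu}
\qquad
\text{Plenoptic 2.0 (focused):}\quad \frac{1}{a} + \frac{1}{b} = \frac{1}{f_{\mu}},
\quad m = -\frac{b}{a}
\]
```

In plenoptic 1.0 each microlens images the main-lens aperture, trading spatial for angular resolution; in plenoptic 2.0 each microlens relays a focused image of the main-lens image plane at magnification $m$.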
Citations: 5
AN ANALYSIS OF DEMOSAICING FOR PLENOPTIC CAPTURE BASED ON RAY OPTICS
Yongwei Li, R. Olsson, Mårten Sjöström
The plenoptic camera is gaining more and more attention, as it captures the 4D light field of a scene in a single shot and enables a wide range of post-processing applications. However, the pre-processing steps for captured raw data, such as demosaicing, have been overlooked. Most existing decoding pipelines for plenoptic cameras still apply demosaicing schemes developed for conventional cameras. In this paper, we analyze the sampling pattern of microlens-based plenoptic cameras using ray-tracing techniques and ray phase-space analysis. The goal of this work is to demonstrate guidelines and principles for demosaicing plenoptic captures by taking the unique microlens array design into account. We show that the sampling of the plenoptic camera behaves differently from that of a conventional camera and that the desired demosaicing scheme is depth-dependent.
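The conventional baseline the paper argues against looks like the bilinear sketch below (an RGGB layout is assumed; the kernels are the textbook bilinear ones, not anything from the paper). Its neighbor averaging is exactly what becomes questionable behind a microlens array, where adjacent pixels need not sample adjacent scene points:

```python
import numpy as np
from scipy.signal import convolve2d

def bilinear_demosaic_rggb(raw):
    """Bilinear demosaicing of an RGGB Bayer mosaic (H, W float array).
    A conventional-camera scheme of the kind the paper shows is a poor
    fit for lenslet-based plenoptic captures."""
    h, w = raw.shape
    r_mask = np.zeros((h, w)); r_mask[0::2, 0::2] = 1
    b_mask = np.zeros((h, w)); b_mask[1::2, 1::2] = 1
    g_mask = 1 - r_mask - b_mask

    # textbook bilinear interpolation kernels
    k_g = np.array([[0, 1, 0], [1, 4, 1], [0, 1, 0]]) / 4.0
    k_rb = np.array([[1, 2, 1], [2, 4, 2], [1, 2, 1]]) / 4.0

    r = convolve2d(raw * r_mask, k_rb, mode="same")
    g = convolve2d(raw * g_mask, k_g, mode="same")
    b = convolve2d(raw * b_mask, k_rb, mode="same")
    return np.dstack([r, g, b])
```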
Citations: 2
ANALYSIS OF ACCOMMODATION CUES IN HOLOGRAPHIC STEREOGRAMS
Jani Mäkinen, E. Sahin, A. Gotchev
The simplicity of the holographic stereogram (HS) makes it an attractive option in comparison to the more complex coherent computer-generated hologram (CGH) methods. The cost of this simplicity is that the HS cannot accurately reconstruct deep scenes due to the lack of correct accommodation cues. The exact nature of the accommodation cues present in HSs, however, has not been investigated. In this paper, we analyze the relation between the hologram sampling properties and the perceived accommodation response. The HS can be considered a generator of a discrete light field (LF) and can thus be examined by considering the ray-oriented nature of the light diffracted by the hologram. We further support the analysis with a numerical reconstruction tool simulating the viewing process of the human eye. The simulation results demonstrate that HSs can provide accommodation cues depending on the choice of hologram segmentation size. It is further demonstrated that the accommodation response can be enhanced at the expense of a loss in perceived spatial resolution.
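The authors' numerical reconstruction tool models the viewing eye; a generic scalar-diffraction building block that such simulations typically rest on is angular-spectrum propagation, sketched below (our own code with illustrative parameters, not the authors' tool):

```python
import numpy as np

def angular_spectrum_propagate(field, wavelength, pitch, z):
    """Propagate a sampled complex field by distance z (meters) using the
    angular spectrum method; the basic step for numerically refocusing a
    hologram at different depths to probe accommodation cues."""
    ny, nx = field.shape
    fx = np.fft.fftfreq(nx, d=pitch)
    fy = np.fft.fftfreq(ny, d=pitch)
    FX, FY = np.meshgrid(fx, fy)
    arg = 1.0 - (wavelength * FX) ** 2 - (wavelength * FY) ** 2
    kz = 2 * np.pi / wavelength * np.sqrt(np.maximum(arg, 0.0))
    H = np.exp(1j * kz * z) * (arg > 0)  # drop evanescent components
    return np.fft.ifft2(np.fft.fft2(field) * H)

# e.g. refocus a 633 nm hologram field sampled at 8 µm pitch over 5 mm:
# u_z = angular_spectrum_propagate(u0, 633e-9, 8e-6, 5e-3)
```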
Citations: 3