
IVMSP 2013: Latest Publications

Detecting moving spheres in 3D point clouds via the 3D velocity Hough Transform
Pub Date : 2013-06-10 DOI: 10.1109/IVMSPW.2013.6611895
Anas Abuzaina, Thamer S. Alathari, M. Nixon, J. Carter
We present a new approach to extracting moving spheres from a sequence of 3D point clouds. The new 3D velocity Hough Transform (3DVHT) incorporates motion parameters in addition to structural parameters in an evidence-gathering process to accurately detect moving spheres in any given point cloud of the sequence. We demonstrate its capability to detect spheres that are obscured within the sequence of point clouds, which conventional approaches cannot achieve. We apply our algorithm to real and synthetic data and demonstrate its ability to detect fully occluded spheres by exploiting inter-frame correlation within the 3D point cloud sequence.
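The evidence-gathering idea can be illustrated with a toy accumulator: every point in every frame votes for the joint (initial centre, velocity) hypotheses consistent with a sphere of known radius. This is a minimal sketch, not the authors' implementation; the coarse 1D grids, the known radius, and the vote tolerance are assumptions chosen for illustration.

```python
import numpy as np

def sphere_points(center, radius, n=200, rng=None):
    """Sample n points on a sphere surface (synthetic test data)."""
    rng = rng if rng is not None else np.random.default_rng(0)
    d = rng.normal(size=(n, 3))
    d /= np.linalg.norm(d, axis=1, keepdims=True)
    return center + radius * d

def vht_detect(frames, radius, c_grid, v_grid, tol=0.05):
    """Toy 3D velocity Hough Transform: accumulate votes over joint
    (initial centre c0, velocity v) hypotheses across the whole sequence.
    A point p in frame t supports (c0, v) if |p - (c0 + v*t)| ~ radius."""
    best, best_votes = None, -1
    for c0 in c_grid:
        for v in v_grid:
            votes = 0
            for t, pts in enumerate(frames):
                dist = np.linalg.norm(pts - (c0 + v * t), axis=1)
                votes += int(np.sum(np.abs(dist - radius) < tol))
            if votes > best_votes:
                best, best_votes = (c0, v), votes
    return best

# Synthetic sequence: unit sphere starting at the origin, moving +0.5 in x per frame
rng = np.random.default_rng(1)
frames = [sphere_points(np.array([0.5 * t, 0.0, 0.0]), 1.0, rng=rng)
          for t in range(4)]
c_grid = [np.array([x, 0.0, 0.0]) for x in np.linspace(-1.0, 1.0, 5)]
v_grid = [np.array([vx, 0.0, 0.0]) for vx in np.linspace(0.0, 1.0, 5)]
c0, v = vht_detect(frames, 1.0, c_grid, v_grid)
print(c0[0], v[0])  # recovers initial centre x=0.0 and velocity vx=0.5
```

Because votes are accumulated over the whole sequence, a sphere missing from one frame (fully occluded) can still win the accumulator, which is the property the abstract highlights.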
Citations: 3
A dataset of Kinect-based 3D scans
Pub Date : 2013-06-10 DOI: 10.1109/IVMSPW.2013.6611937
Alexandros Doumanoglou, S. Asteriadis, D. Alexiadis, D. Zarpalas, P. Daras
A new, publicly available, 3D-reconstruction-oriented dataset is presented. It consists of multi-view range scans of small objects captured on a turntable. Range scans were captured using a Microsoft Kinect sensor as well as an accurate laser scanner (Vivid VI-700 Non-contact 3D Digitizer), whose reconstructions can serve as ground-truth data. The construction of this dataset was motivated by the lack of a relevant Kinect dataset, despite the fact that Kinect has attracted the attention of many researchers and home enthusiasts. Thus, the core idea behind the dataset is to allow the validation of 3D surface reconstruction methodologies on point sets extracted with Kinect sensors. The dataset consists of multi-view range scans of 59 objects, along with the calibration information necessary for experimentation in the field of 3D reconstruction from Kinect depth data. Two well-known 3D reconstruction methods were selected and applied to the dataset, in order to demonstrate its applicability in the 3D reconstruction field, as well as the challenges that arise. Additionally, an appropriate 3D reconstruction evaluation methodology is presented. Finally, as the dataset comes in classes of similar objects, it can also be used for classification purposes, using the provided 2.5D/3D features.
Citations: 10
Compensation for inter-reflection and control of reflection coefficient for directional diffuse object in photometric stereo
Pub Date : 2013-06-10 DOI: 10.1109/IVMSPW.2013.6611910
O. Ikeda
In photometric stereo, two common problems are inter-reflections and non-Lambertian reflection. The former locally blurs the images, resulting in a locally distorted shape, and its effects become more significant as the object structure becomes more complicated. The latter, on the other hand, gives global distortions and may be represented by directional diffuse reflection. In this paper, we present an image processing method to reduce the effects of inter-reflections for directional diffuse reflection objects. The method is described mathematically, and experimental results are given to examine it.
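For context, the Lambertian baseline that directional-diffuse models extend can be sketched as a per-pixel least-squares problem: stacking k images under known lights L, solve I = L(rho n) and split the result into albedo and unit normal. This is the textbook method, not the paper's compensation scheme; the single-pixel synthetic check is illustrative only.

```python
import numpy as np

def photometric_stereo(intensities, lights):
    """Lambertian photometric stereo baseline: per pixel, solve
    I = L @ (rho * n) in the least-squares sense, then split the
    result g = rho * n into albedo rho and unit normal n."""
    # intensities: (k, m) stack of k images over m pixels; lights: (k, 3)
    g, *_ = np.linalg.lstsq(lights, intensities, rcond=None)  # (3, m)
    rho = np.linalg.norm(g, axis=0)
    n = g / np.maximum(rho, 1e-12)
    return rho, n

# Synthetic one-pixel check with a known normal and albedo
lights = np.array([[0.0, 0.0, 1.0], [1.0, 0.0, 1.0], [0.0, 1.0, 1.0]])
lights /= np.linalg.norm(lights, axis=1, keepdims=True)
n_true, rho_true = np.array([0.0, 0.0, 1.0]), 0.8
I = rho_true * (lights @ n_true).reshape(-1, 1)
rho, n = photometric_stereo(I, lights)
print(rho[0], n[:, 0])  # albedo ~0.8, normal ~(0, 0, 1)
```

Inter-reflections and directional diffuse reflection both violate the linear model above, which is why the recovered shape distorts; the paper's method pre-processes the images to bring them closer to this model.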
Citations: 1
Real-time, realistic full-body 3D reconstruction and texture mapping from multiple Kinects
Pub Date : 2013-06-10 DOI: 10.1109/IVMSPW.2013.6611939
D. Alexiadis, D. Zarpalas, P. Daras
Multi-party 3D Tele-Immersive (TI) environments, supporting realistic interaction among distant users, are the future of tele-conferencing. Real-time, full-body 3D reconstruction, an important task for TI applications, is addressed in this paper. A volumetric method for the reconstruction of watertight models of moving humans is presented, along with details of appropriate texture mapping to enhance the visual quality. The reconstruction uses input from multiple consumer depth cameras, specifically Kinect sensors. The presented results verify the effectiveness of the proposed methodologies with respect to visual quality and frame rate.
Citations: 31
Seam carving for stereoscopic video
Pub Date : 2013-06-10 DOI: 10.1109/IVMSPW.2013.6611898
B. Guthier, J. Kiess, S. Kopf, W. Effelsberg
In this paper, we present a novel technique for seam carving of stereoscopic video. It removes seams of pixels in areas that are most likely to go unnoticed by the viewer. When applying seam carving to stereoscopic video rather than monoscopic still images, new challenges arise. The detected seams must be consistent between the left and the right view, so that no depth information is destroyed. When removing seams in two consecutive frames, temporal consistency between the removed seams must be established to avoid flicker in the resulting video. By making certain assumptions, the available depth information can be harnessed to improve the quality achieved by seam carving. Assuming that closer pixels are more important, the algorithm can focus on removing distant pixels first. Furthermore, we assume that coherent pixels belonging to the same object have similar depth. By avoiding cuts through edges in the depth map, we can thus avoid cutting through object boundaries.
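The monoscopic core of seam carving, a dynamic-programming search for a minimum-energy 8-connected seam, can be sketched as follows. The paper's contributions (left/right-view consistency, temporal consistency, and depth-weighted energy) sit on top of this step and are not reproduced here.

```python
import numpy as np

def find_vertical_seam(energy):
    """Dynamic programming over a per-pixel energy map: cost[y, x] is the
    cheapest 8-connected path from the top row to pixel (y, x)."""
    h, w = energy.shape
    cost = energy.astype(float).copy()
    for y in range(1, h):
        left = np.r_[np.inf, cost[y - 1, :-1]]    # upper-left neighbour
        right = np.r_[cost[y - 1, 1:], np.inf]    # upper-right neighbour
        cost[y] += np.minimum(np.minimum(left, cost[y - 1]), right)
    # Backtrack the cheapest path from bottom to top
    seam = [int(np.argmin(cost[-1]))]
    for y in range(h - 2, -1, -1):
        x = seam[-1]
        lo, hi = max(0, x - 1), min(w, x + 2)
        seam.append(lo + int(np.argmin(cost[y, lo:hi])))
    return seam[::-1]  # seam[y] = column to remove in row y

# A cheap column of zeros at x=2 should be found as the seam
energy = np.ones((4, 5))
energy[:, 2] = 0.0
print(find_vertical_seam(energy))  # [2, 2, 2, 2]
```

In the stereoscopic setting, the same seam search would be run on a combined energy so the chosen columns agree across the left view, the right view, and the previous frame.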
Citations: 15
Depth down/up-sampling using hybrid correlation for depth coding
Pub Date : 2013-06-10 DOI: 10.1109/IVMSPW.2013.6611902
Yaoxue Xing, Yao Zhao, Chunyu Lin, H. Bai, Chao Yao
In this paper, spatial correlation and temporal consistency are jointly employed in depth down/up-sampling to achieve efficient depth coding. Exploiting the temporal consistency of the depth maps, a direct down-sampling method is applied in which pixels at different positions are sampled in adjacent frames. After down-sampling, High Efficiency Video Coding (HEVC) is adopted to encode and decode the down-sampled depth maps for its good compression efficiency. Then, the proposed up-sampling algorithm, which considers the spatial correlation among neighboring pixels in one depth map, the temporal consistency between adjacent depth frames, and the correlation between the depth map and its corresponding texture image, is used to obtain a good-quality depth map at full resolution. Experimental results show that the proposed algorithm improves both depth map coding performance and synthesis quality.
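One plausible reading of the down-sampling step (the exact pattern is not specified in the abstract) is that adjacent frames keep samples at complementary grid phases, so that consecutive frames jointly cover the full-resolution grid; a hedged sketch:

```python
import numpy as np

def interlaced_downsample(depth_seq, factor=2):
    """Alternating-phase down-sampling: frame t keeps samples at grid
    phase t % factor, so adjacent frames sample different positions and
    together cover the full grid (an assumed reading of the abstract's
    'pixels at different positions ... between the adjacent frames')."""
    out = []
    for t, d in enumerate(depth_seq):
        phase = t % factor
        out.append(d[phase::factor, phase::factor])
    return out

# Two identical 4x4 depth frames: frame 0 keeps phase-0 samples, frame 1 phase-1
seq = [np.arange(16).reshape(4, 4), np.arange(16).reshape(4, 4)]
small = interlaced_downsample(seq)
print(small[0].shape, int(small[1][0, 0]))
```

Under temporal consistency, the up-sampler can then borrow the missing phase from the neighbouring frame instead of interpolating it purely spatially.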
Citations: 1
Trilateral filter construction for depth map upsampling
Pub Date : 2013-06-10 DOI: 10.1109/IVMSPW.2013.6611911
Jaekwang Kim, Jaeho Lee, Seung-Ryong Han, Dowan Kim, Jongsul Min, Changick Kim
In recent years, fusion camera systems that consist of color cameras and Time-of-Flight (TOF) depth sensors have become popular due to their depth-sensing capability at real-time frame rates. However, captured depth maps are limited to low resolution compared to the corresponding color images due to physical limitations of the TOF depth sensor. Although many algorithms have been proposed, they still yield erroneous results, especially when the boundaries of the depth map and the color image are not aligned. We therefore propose a novel kernel regression framework to generate a high-quality depth map. Our proposed filter is based on vectors pointing to homogeneous pixels, i.e., unit vectors toward similar neighbors in the local region. The vectors are used to detect misaligned regions between color edges and depth edges. Experimental comparisons with other data fusion techniques prove the superiority of the proposed algorithm.
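The trilateral filter itself is not specified in the abstract; as a baseline, joint bilateral upsampling (the two-term filter that such methods extend with a third, misalignment-aware term) can be sketched as below. All parameter values and the 3x3 support are illustrative choices.

```python
import numpy as np

def joint_bilateral_upsample(depth_lo, color_hi, scale, sigma_s=1.0, sigma_r=0.1):
    """Joint bilateral upsampling baseline: each high-resolution pixel
    averages nearby low-resolution depth samples, weighted by spatial
    distance and by colour similarity in the high-resolution guide image."""
    H, W = color_hi.shape[:2]
    h, w = depth_lo.shape
    out = np.zeros((H, W))
    for Y in range(H):
        for X in range(W):
            y0, x0 = Y / scale, X / scale  # position in low-res coordinates
            acc = wsum = 0.0
            for y in range(max(0, int(y0) - 1), min(h, int(y0) + 2)):
                for x in range(max(0, int(x0) - 1), min(w, int(x0) + 2)):
                    ws = np.exp(-((y - y0) ** 2 + (x - x0) ** 2) / (2 * sigma_s ** 2))
                    dc = color_hi[Y, X] - color_hi[min(H - 1, y * scale), min(W - 1, x * scale)]
                    wr = np.exp(-float(np.dot(dc, dc)) / (2 * sigma_r ** 2))
                    acc += ws * wr * depth_lo[y, x]
                    wsum += ws * wr
            out[Y, X] = acc / max(wsum, 1e-12)
    return out

# A depth edge aligned with a colour edge stays sharp after upsampling
depth_lo = np.array([[0.0, 1.0], [0.0, 1.0]])
color_hi = np.zeros((4, 4)); color_hi[:, 2:] = 1.0
up = joint_bilateral_upsample(depth_lo, color_hi, scale=2)
print(np.round(up, 2))
```

When the colour and depth edges are misaligned, this two-term filter copies colour-edge texture into the depth map; the paper's third term is designed to detect and down-weight exactly those regions.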
Citations: 6
Automatic object-based 2D-to-3D conversion
Pub Date : 2013-06-10 DOI: 10.1109/IVMSPW.2013.6611904
Hak Gu Kim, B. Song
This paper presents a new 2D-to-3D conversion method based on structure from motion (SfM), in order to relieve visual fatigue and simultaneously improve the three-dimensional (3D) effect. First, we obtain 3D information such as camera positions and depth values via SfM. Then, we segment the input image and find the nearest object region. Next, the projective matrix of the nearest object is computed. Finally, the nearest object is warped using the computed projective matrix, and the other regions are warped according to their depth values. Experimental results show that the proposed method synthesizes stereo views better than other methods.
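A much simplified stand-in for the final warping step is plain depth-image-based rendering: shift each pixel horizontally in proportion to its depth to synthesise the second view. The paper's per-object projective warping is more general; the `max_disp` scaling and the lack of occlusion handling here are illustrative simplifications.

```python
import numpy as np

def dibr_shift(image, depth, max_disp=4):
    """Minimal depth-image-based rendering: shift each pixel horizontally
    in proportion to its (normalised, 0..1) depth to synthesise a second
    view. Occlusion ordering and hole filling are deliberately omitted."""
    h, w = image.shape
    out = np.zeros_like(image)
    disp = np.round(max_disp * depth).astype(int)
    for y in range(h):
        for x in range(w):
            x2 = x + disp[y, x]
            if 0 <= x2 < w:
                out[y, x2] = image[y, x]
    return out

img = np.arange(8.0).reshape(1, 8)
far = dibr_shift(img, np.zeros((1, 8)))   # zero depth: no shift
near = dibr_shift(img, np.ones((1, 8)))   # unit depth: shift by max_disp
print(far[0], near[0])
```

Per-pixel shifting like this leaves holes at depth discontinuities, which is one motivation for warping the nearest object with its own projective matrix instead.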
Citations: 2
Multi-source inverse geometry CT (MS-IGCT) system: A new concept of 3D CT imaging
Pub Date : 2013-06-10 DOI: 10.1109/IVMSPW.2013.6611894
Shinkook Choi, J. Baek
Third-generation CT architectures are approaching fundamental limits. While alternative architectures such as electron beam CT and dual-energy CT have been proposed, they have severe tradeoffs in terms of image quality, dose efficiency, and complexity. In this work, we present the concept of the multi-source inverse geometry CT (MS-IGCT) system, which overcomes several limits of current CT architectures; the 3D reconstruction algorithm and initial experimental results of the MS-IGCT system are also presented.
Citations: 1
Autostereoscopic display with a shifted LC barrier structure for a wide viewing zone
Pub Date : 2013-06-10 DOI: 10.1109/IVMSPW.2013.6611893
Kihyung Kang, Seondeok Hwang, J. Yoon, Dongchoon Hwang, Soobae Moon
In this paper, we propose a shifted ITO electrode structure for an LC barrier that provides additional sweet spots in an autostereoscopic 3D display. The shifted structure consists of vertically interdigitated ITO electrodes on both the bottom and top layers, assembled with a horizontal offset of half a pitch. Each electrode can be driven according to the viewer's position, which has the effect of shifting the viewing zone. In this way, the viewing zone can be widened using a head tracking technique, giving the viewer some freedom of viewing position while maintaining high image quality.
Citations: 1