
2011 International Conference on 3D Imaging (IC3D): Latest Publications

Study of asymmetric quality between coded views in depth-enhanced multiview video coding
Pub Date : 2011-12-01 DOI: 10.1109/IC3D.2011.6584381
P. Aflaki, D. Rusanovskyy, T. Utriainen, E. Pesonen, M. Hannuksela, Satu Jumisko-Pyykkö, M. Gabbouj
Depth-enhanced multiview video formats, such as the multiview video plus depth (MVD) format, enable a natural 3D visual experience that cannot be delivered by traditional 2D or stereo video services. In this paper we studied an asymmetric MVD technique for coding of three views that enabled rendering of the same bitstream on stereoscopic displays and multiview autostereoscopic displays. A larger share of the bitrate was allocated to the central view, whereas the two side views were coded at lower quality. The three decoded views were used by a Depth-Image-Based Rendering (DIBR) algorithm to produce virtual intermediate views. A stereopair at a separation suitable for viewing on a stereoscopic display was selected among the synthesized views. A large-scale subjective assessment of the selected synthesized stereopair was performed. A bitrate reduction of 20% on average, and up to 22%, was achieved with no penalty in subjectively perceived quality. In addition, our analysis shows that a similar bitrate reduction with no difference in subjective quality can be achieved in the multiview autostereoscopic display scenario.
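The view-synthesis step mentioned above can be illustrated with a bare-bones DIBR forward warp. The sketch below is only a schematic of the general technique (no occlusion-aware ordering, no hole filling, hypothetical baseline and focal length), not the renderer used in the paper.

```python
# A bare-bones DIBR-style forward warp (illustrative sketch only, not the paper's
# renderer): each pixel of a decoded view is shifted horizontally by a disparity
# derived from its depth value to synthesize a virtual intermediate view.
# The baseline and focal length below are hypothetical placeholders.
import numpy as np

def render_virtual_view(texture, depth, baseline, focal):
    """texture: HxWx3 uint8, depth: HxW in metres. Returns the forward-warped view."""
    h, w, _ = texture.shape
    out = np.zeros_like(texture)
    disparity = np.round(focal * baseline / np.maximum(depth, 1e-6)).astype(int)
    for y in range(h):
        for x in range(w):              # a real renderer also handles occlusion order
            xv = x - disparity[y, x]    # shift towards the virtual camera position
            if 0 <= xv < w:
                out[y, xv] = texture[y, x]
    return out                          # disocclusion holes are left black in this sketch

# Toy usage with synthetic data; real inputs are the decoded MVD texture and depth views.
tex = np.random.randint(0, 255, (48, 64, 3), dtype=np.uint8)
dep = np.full((48, 64), 2.0)            # a flat scene 2 m away
virtual = render_virtual_view(tex, dep, baseline=0.02, focal=300.0)
```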
Citations: 4
Geometrical 3D reconstruction using real-time RGB-D cameras
Pub Date : 2011-12-01 DOI: 10.1109/IC3D.2011.6584368
B. Penelle, Arnaud Schenkel, N. Warzée
An RGB-D image combines, for each pixel, the classical three color channels with a fourth channel providing depth information. Devices that produce RGB-D images in real time at rather good resolution are currently available on the market. With this type of device it is possible to acquire and process 3D textured information in real time, paving the way for numerous applications in the field of computer imaging and vision. In this paper we analyse the accuracy of a low-cost system and show how this kind of device and the RGB-D images it produces allow us to acquire 3D models of real objects. A first application is presented that combines multiple RGB-D images of a static scene, taken from different viewpoints, in order to reconstruct a complete 3D model of the scene. A second application combines on-the-fly RGB-D images coming from multiple devices, generating a 3D model in which the occlusion problems inherent in monocular observations are drastically reduced.
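As a concrete illustration of the first step such a pipeline needs, the sketch below back-projects one RGB-D frame into a coloured point cloud through a pinhole model; the intrinsics are hypothetical Kinect-style values, not parameters from the paper's setup.

```python
# A minimal sketch (assumed pinhole model, hypothetical Kinect-style intrinsics):
# back-projecting one RGB-D frame into a coloured 3D point cloud. Fusing several
# viewpoints then reduces to transforming each cloud by its estimated camera pose.
import numpy as np

def backproject_rgbd(depth, rgb, fx, fy, cx, cy):
    """depth: HxW in metres, rgb: HxWx3 uint8. Returns (Nx3 points, Nx3 colours)."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    valid = depth > 0                       # zero depth means no measurement
    z = depth[valid]
    x = (u[valid] - cx) * z / fx
    y = (v[valid] - cy) * z / fy
    points = np.stack([x, y, z], axis=1)
    colours = rgb[valid].astype(np.float32) / 255.0
    return points, colours

# Toy usage with synthetic data (a real frame would come from the RGB-D camera driver).
depth = np.random.uniform(0.5, 3.0, (480, 640))
rgb = np.random.randint(0, 255, (480, 640, 3), dtype=np.uint8)
pts, cols = backproject_rgbd(depth, rgb, fx=525.0, fy=525.0, cx=319.5, cy=239.5)
```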
Citations: 3
HELIUM3D: A laser-scanned head-tracked autostereoscopic display
Pub Date : 2011-12-01 DOI: 10.1109/IC3D.2011.6584369
P. Surman, S. Day, E. Willman, H. Baghsiahi, I. Sexton, K. Hopf
This paper describes the development of an autostereoscopic laser-based display that can serve several users who are able to move freely over a large area. It is intended for television applications and operates by tracking the positions of the viewers' heads and directing regions referred to as exit pupils towards their eyes. An exit pupil is a region from which either a left or a right image is seen across the complete area of the screen. A description of the 60 Hz and 120 Hz prototypes developed is given. A set-up that does not show images but demonstrates the operation of the novel dynamic exit pupil formation system is also described.
Citations: 1
Stereo image rectification algorithm for multi-view 3D display
Pub Date : 2011-12-01 DOI: 10.1109/IC3D.2011.6584365
Hao Cheng, P. An, Hejian Li, Zhaoyang Zhang
This paper introduces a stereo image rectification algorithm based on image feature points for multi-view three-dimensional (3D) display systems. First, we mark the feature points in the multi-view images and calculate the rectification parameters. Then we use these parameters to rotate the images and shift them vertically in order to eliminate the vertical parallax between the multi-view images. Finally, according to the zero-parallax setting (ZPS), we adjust the horizontal parallax to obtain a better stereo image for multi-view 3D display. After the multi-view information is corrected, the stereo effect is greatly enhanced in the multi-view display system. The algorithm has low complexity, suits real-time 3D systems, and improves the stereo image from the observer's point of view.
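A minimal sketch of this kind of correction, assuming feature matches are already available from any standard detector/matcher, is shown below: a small rotation and a vertical shift of the right image are estimated by least squares so that vertical parallax is minimised. This illustrates the idea only and is not the authors' exact algorithm; the ZPS-driven horizontal adjustment would amount to adding a further horizontal offset to the same affine map.

```python
# A small-angle sketch (not the paper's exact algorithm): estimate a rotation about the
# image centre and a vertical shift that minimise the vertical parallax of matched
# feature points, then warp the right image. pts_l / pts_r are Nx2 (x, y) arrays of
# matches from any standard detector/matcher (e.g. ORB + brute-force matching).
import numpy as np
import cv2

def reduce_vertical_parallax(right, pts_l, pts_r):
    h, w = right.shape[:2]
    cx, cy = w / 2.0, h / 2.0
    # Small-angle model: y_l ≈ y_r + theta * (x_r - cx) + ty  ->  linear least squares.
    A = np.stack([pts_r[:, 0] - cx, np.ones(len(pts_r))], axis=1)
    b = pts_l[:, 1] - pts_r[:, 1]
    (theta, ty), *_ = np.linalg.lstsq(A, b, rcond=None)
    c, s = np.cos(theta), np.sin(theta)
    # Forward affine map: rotation about the image centre plus a vertical translation.
    M = np.array([[c, -s, cx * (1 - c) + s * cy],
                  [s,  c, cy * (1 - c) - s * cx + ty]])
    return cv2.warpAffine(right, M, (w, h))
```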
Citations: 2
A fast segmentation-driven algorithm for accurate stereo correspondence
Pub Date : 2011-12-01 DOI: 10.1109/IC3D.2011.6584384
S. Mattoccia, Leonardo De-Maeztu
Recent cost aggregation strategies that adapt their weights to image content have enabled local algorithms to obtain results comparable to those of global algorithms based on more complex disparity optimization methods. Unfortunately, despite the potential advantages in terms of memory footprint and algorithmic simplicity compared to global algorithms, most of the state-of-the-art cost aggregation strategies deployed in local algorithms are extremely slow. In fact, their execution time is comparable to, and often worse than, that of global approaches. In this paper we propose a framework for accurate and fast cost aggregation based on segmentation that allows us to obtain results comparable to state-of-the-art approaches much more efficiently (the execution time drops from minutes to seconds). A further speed-up is achieved by taking advantage of the multi-core capabilities available nowadays in almost any processor. The comparison with state-of-the-art cost aggregation strategies highlights the effectiveness of our proposal.
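To make the aggregation idea concrete, the sketch below averages a simple absolute-difference matching cost inside precomputed segments (e.g. superpixels) before a winner-take-all decision. It illustrates the general principle only and is not the authors' algorithm.

```python
# A toy segmentation-driven local stereo matcher (illustrates the principle only, not
# the paper's method): per-pixel absolute-difference costs are averaged inside
# precomputed segments, then disparities are chosen by winner-take-all.
import numpy as np

def segment_aggregated_disparity(left, right, segments, max_disp):
    """left/right: HxW float grayscale, segments: HxW integer labels. Returns HxW disparity."""
    h, w = left.shape
    n_seg = int(segments.max()) + 1
    flat_seg = segments.ravel()
    best_cost = np.full((h, w), np.inf)
    disparity = np.zeros((h, w), dtype=np.int32)
    for d in range(max_disp + 1):
        cost = np.full((h, w), np.nan)
        cost[:, d:] = np.abs(left[:, d:] - right[:, :w - d])   # raw matching cost
        valid = ~np.isnan(cost)
        # Aggregate: mean cost over each segment, computed with bincount for speed.
        seg_sum = np.bincount(flat_seg, weights=np.where(valid, cost, 0.0).ravel(),
                              minlength=n_seg)
        seg_cnt = np.bincount(flat_seg, weights=valid.ravel().astype(float),
                              minlength=n_seg)
        agg = (seg_sum / np.maximum(seg_cnt, 1.0))[segments]
        update = agg < best_cost
        best_cost[update] = agg[update]
        disparity[update] = d
    return disparity
```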
Citations: 4
Multi-view photometric stereo of non-Lambertian surface under general illuminations
Pub Date : 2011-12-01 DOI: 10.1109/IC3D.2011.6584370
Guannan Li, Yebin Liu, Qionghai Dai
We present an approach to reconstruct fine-scale 3D surface models of non-Lambertian objects from multi-view, multi-illumination image sets. Unlike most previous work in photometric stereo, this approach works under general lighting conditions, i.e. natural outdoor illumination. Our method begins with a raw 3D model reconstructed with available multi-view stereo techniques. Considering the sparse characteristics of surface reflectance in the view-illumination space, we first estimate the diffuse appearance of the 3D model from the multi-view captured images, and then refine it using the surface appearance under varying illuminations. With the separated low-rank diffuse component, we exploit photometric cues to recover detailed surface structure. Experimental results on various real-world scenes validate that the proposed method is able to handle surfaces with specular reflectance, even in the presence of saturated colours, highlights and cast shadows.
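For reference, the classical calibrated Lambertian case that photometric cues build on can be written in a few lines. The sketch below shows that textbook special case (known directional lights, no specularities) purely for illustration; it is not the general-illumination method of the paper.

```python
# Classical calibrated Lambertian photometric stereo (textbook special case, shown for
# illustration only; the paper addresses the much harder general-illumination setting).
import numpy as np

def photometric_stereo(images, lights):
    """images: KxHxW grayscale intensities, lights: Kx3 unit light directions.
    Returns (HxWx3 unit normals, HxW albedo)."""
    k, h, w = images.shape
    I = images.reshape(k, -1)                       # K x (H*W) intensity matrix
    G, *_ = np.linalg.lstsq(lights, I, rcond=None)  # solve L G = I, where G = albedo * normal
    albedo = np.linalg.norm(G, axis=0)
    normals = (G / np.maximum(albedo, 1e-8)).T.reshape(h, w, 3)
    return normals, albedo.reshape(h, w)

# Toy usage: a flat Lambertian patch rendered under three hypothetical light directions.
lights = np.array([[0.0, 0.0, 1.0], [0.5, 0.0, 0.866], [0.0, 0.5, 0.866]])
true_normal = np.array([0.0, 0.0, 1.0])
images = np.stack([np.full((4, 4), lights[i] @ true_normal) for i in range(3)])
normals, albedo = photometric_stereo(images, lights)   # normals ≈ [0, 0, 1], albedo ≈ 1
```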
Citations: 1
The floating window, its benefits, methods, requirements
Pub Date : 2011-12-01 DOI: 10.1109/IC3D.2011.6584363
O. Cahen
Floating the window means projecting the stereo window forward of the screen. The benefits, methods, and requirements for comfortable viewing conditions are reviewed.
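One common way to realise this, sketched below and not necessarily the exact masking scheme discussed in the paper, is to mask a black strip on the left edge of the left-eye image and on the right edge of the right-eye image, which gives the window border crossed (negative) parallax and thus floats it towards the viewer.

```python
# A minimal sketch of one common floating-window realisation (not necessarily the exact
# scheme discussed in the paper): black strips on the left edge of the left-eye image and
# the right edge of the right-eye image give the window border crossed (negative) parallax.
import numpy as np

def float_window(left, right, shift_px):
    """left/right: HxWx3 images; shift_px: strip width in pixels (wider = window floats closer)."""
    left_out, right_out = left.copy(), right.copy()
    if shift_px > 0:
        left_out[:, :shift_px] = 0      # mask the left edge of the left-eye image
        right_out[:, -shift_px:] = 0    # mask the right edge of the right-eye image
    return left_out, right_out

# Toy usage: float the window with a 12-pixel strip on a synthetic stereo pair.
L = np.random.randint(0, 255, (1080, 1920, 3), dtype=np.uint8)
R = np.random.randint(0, 255, (1080, 1920, 3), dtype=np.uint8)
L_f, R_f = float_window(L, R, 12)
```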
Citations: 0
Tao, a 3D dynamic document description language
Pub Date : 2011-12-01 DOI: 10.1109/IC3D.2011.6584383
Christophe de Dinechin, Catherine Burvelle, Jerome Forissier
Tao Presentations uses a dialect of the XL programming language to describe interactive multimedia 3D documents. This approach makes it easy to create 3D contents that can be used to present information, to visualize scientific data or to explore stereoscopic or auto-stereoscopic effects rapidly. The demands of an interactive, real-time environment have created a number of interesting challenges for us to solve, ranging from language expressiveness and document semantics to graphics performance and rendering quality.
Citations: 0
Three dimensional imaging for through-the-wall human sensing
Pub Date : 2011-12-01 DOI: 10.1109/IC3D.2011.6584387
P. K. Kumar, T. Kumar
Through-the-wall sensing is growing rapidly with present-day technology. In particular, Ultra Wideband (UWB) technology paves the way for this. Three-dimensional imaging of humans behind walls, foliage or other rubble provides information that can help to save lives. This paper simulates a human-like structure in a behind-the-wall environment and extracts a three-dimensional image of the structure. Electromagnetic signals were transmitted, and a 3D image was obtained by processing the received echoes. Signal processing aspects prior to imaging and the method used to obtain the three-dimensional image are also discussed.
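The image formation step can be illustrated with standard delay-and-sum backprojection, sketched below under simplifying assumptions (free-space propagation, monostatic echoes, hypothetical antenna geometry and sampling rate); the paper's actual processing chain may differ.

```python
# Delay-and-sum backprojection, a standard way to form a 3D image from UWB echoes
# (illustrative sketch only; free-space propagation is assumed and wall effects,
# which a real through-the-wall system must compensate, are ignored).
import numpy as np

C = 3e8  # assumed propagation speed in m/s

def backproject_echoes(echoes, antennas, voxels, fs):
    """echoes: A x T monostatic echo samples, antennas: A x 3 positions (m),
    voxels: N x 3 positions (m), fs: sampling rate (Hz). Returns N voxel intensities."""
    n_ant, n_t = echoes.shape
    image = np.zeros(len(voxels))
    for a in range(n_ant):
        dist = np.linalg.norm(voxels - antennas[a], axis=1)   # one-way range to each voxel
        idx = np.round(2.0 * dist / C * fs).astype(int)       # round-trip delay in samples
        valid = idx < n_t
        image[valid] += echoes[a, idx[valid]]                 # coherent sum across antennas
    return np.abs(image)

# Toy usage: 8 antennas on a line and a 20x20x20 voxel grid 1-2 m in front of them.
antennas = np.stack([np.linspace(-0.5, 0.5, 8), np.zeros(8), np.zeros(8)], axis=1)
g = np.linspace(0.0, 1.0, 20)
voxels = np.stack(np.meshgrid(g - 0.5, g + 1.0, g, indexing="ij"), axis=-1).reshape(-1, 3)
intensities = backproject_echoes(np.random.randn(8, 4096), antennas, voxels, fs=10e9)
```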
Citations: 0
A digital 3D signage system and its effect on customer behavior
Pub Date : 2011-12-01 DOI: 10.1109/IC3D.2011.6584378
Mårten Sjöström, R. Olsson, Rolf Dalin
The use of digital signs simplifies distribution. Importantly, a digital sign draws more attention than a static one. One way to increase attention further is to add perceived depth. The paper discusses possible alternatives for extending an existing digital signage system to display stereoscopic 3D content, comparing a decentralized distribution solution with a centralized one. A functional prototype system was implemented. A new 3D player was developed to render views from different formats. The implemented system was used to study customer behavior when exposed to digital stereoscopic 3D signage in a direct sales situation. The proportion of sales of the selected products relative to the total number of products sold varied approximately equally before and during the tests. An interview study suggests that the sign did not influence customer decisions: customers were lost at different stages in this series of steps, among them the sign placement.
Citations: 0