
2014 International Conference on 3D Imaging (IC3D): Latest Publications

Development and validation of a 3D kinematic-based method for determining gait events during overground walking
Pub Date : 2014-12-01 DOI: 10.1109/IC3D.2014.7032604
M. Boutaayamou, C. Schwartz, V. Denoël, B. Forthomme, J. Croisier, G. Garraux, J. Verly, O. Brüls
A new signal processing algorithm is developed for quantifying heel strike (HS) and toe-off (TO) event times solely from measured heel and toe coordinates during overground walking. It is based on a rough estimation of relevant local 3D position signals. An original piecewise linear fitting method is applied to these local signals to accurately identify HS and TO times without using arbitrary experimental coefficients. We validated the proposed method with nine healthy subjects and a total of 322 trials. The extracted temporal gait events were compared to reference data obtained from a force plate. HS and TO times were identified with a temporal accuracy ± precision of 0.3 ms ± 7.1 ms and -2.8 ms ± 7.2 ms, respectively, in comparison with reference data defined with a force threshold of 10 N. This algorithm improves the accuracy of HS and TO detection. Furthermore, it can be used to perform stride-by-stride analysis during overground walking with only recorded heel and toe coordinates.
Citations: 4
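The core idea of piecewise linear fitting for event detection can be illustrated with a minimal sketch: fit two line segments to a local position signal and take the breakpoint that minimizes the total squared residual as the event time. This is only an illustration of the general technique under assumed conditions; the function `two_segment_breakpoint` and the synthetic heel-height signal are assumptions, not the authors' implementation.

```python
import numpy as np

def two_segment_breakpoint(t, z):
    """Fit two line segments to signal z(t) and return the breakpoint index
    that minimizes the total squared residual -- a sketch of piecewise
    linear fitting for event detection, not the paper's exact algorithm."""
    best_k, best_err = None, np.inf
    for k in range(2, len(t) - 2):                 # candidate breakpoints
        err = 0.0
        for s in (slice(0, k + 1), slice(k, len(t))):
            A = np.vstack([t[s], np.ones(len(t[s]))]).T
            res = np.linalg.lstsq(A, z[s], rcond=None)[1]
            err += res[0] if res.size else 0.0     # sum of squared residuals
        if err < best_err:
            best_k, best_err = k, err
    return best_k

# synthetic heel-height signal: descending ramp, then flat after the "event"
t = np.linspace(0.0, 1.0, 50)
z = np.where(t < 0.6, 1.0 - t / 0.6, 0.0) + 0.001 * np.sin(37 * t)
k = two_segment_breakpoint(t, z)                   # index near the kink at t = 0.6
```

Because the breakpoint comes from a least-squares fit over a window rather than a single threshold crossing, no arbitrary experimental coefficient is needed, which mirrors the property the abstract emphasizes.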
Towards automatic stereo pair extraction for 3D visualisation of historical aerial photographs
Pub Date : 2014-12-01 DOI: 10.1109/IC3D.2014.7032575
A. Hast, Andrea Marchetti
An efficient and almost automatic method for stereo pair extraction from aerial photos is proposed. Several challenging problems need to be taken into consideration when creating stereo pairs from historical aerial photos. These problems are discussed and solutions are proposed in order to obtain an almost automatic procedure that requires as little input from the user as possible. The result is a rectified and illumination-corrected stereo pair. We also discuss why viewing aerial photos in stereo is important, since the depth cue gives more information than single photos do.
Citations: 2
An efficient depth estimation using temporal 3D-Warping
Pub Date : 2014-12-01 DOI: 10.1109/IC3D.2014.7032586
S. H. Kumar, K. Suraj, K. Ramakrishnan
This paper presents a computationally efficient method for estimating high-quality depth for multiview video acquired by a camera array in motion. Depth information is essential for a 3DTV display system to generate video streams from virtual viewpoints. Dense depth estimation has been successfully modeled as a Markov Random Field, and several methods such as Iterated Conditional Modes, Graph Cuts and Belief Propagation have been proposed to solve it. While depth estimation using Graph Cuts and Belief Propagation gives accurate results, their computational requirements are high. On the other hand, Iterated Conditional Modes is fast, but the quality of the result is poor. In our work, we propose a technique for boosting the quality of the depth estimated using Iterated Conditional Modes to near Graph Cuts or Belief Propagation levels while keeping the computational cost low.
Citations: 2
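A minimal sketch of Iterated Conditional Modes on a 4-connected grid MRF shows why it is fast but greedy, which is the trade-off the abstract describes. The function `icm`, the unary `data_cost` array, and the absolute-difference smoothness term below are illustrative assumptions, not the paper's exact energy model.

```python
import numpy as np

def icm(data_cost, lam=1.0, iters=10):
    """Iterated Conditional Modes on a 4-connected grid MRF.
    data_cost: (H, W, L) unary costs; pairwise cost: lam * |l_p - l_q|.
    Each pixel is greedily set to the label minimizing its local energy,
    so it converges quickly but can get stuck in poor local minima."""
    H, W, L = data_cost.shape
    labels = data_cost.argmin(axis=2)          # initialize at the unary optimum
    for _ in range(iters):
        changed = False
        for y in range(H):
            for x in range(W):
                best, best_c = labels[y, x], np.inf
                for l in range(L):
                    c = data_cost[y, x, l]
                    for dy, dx in ((-1, 0), (1, 0), (0, -1), (0, 1)):
                        ny, nx = y + dy, x + dx
                        if 0 <= ny < H and 0 <= nx < W:
                            c += lam * abs(l - labels[ny, nx])
                    if c < best_c:
                        best, best_c = l, c
                if best != labels[y, x]:
                    labels[y, x] = best
                    changed = True
        if not changed:                        # converged: no pixel changed
            break
    return labels

# demo: unary costs favor label 1 everywhere except one weak outlier pixel,
# which the smoothness term pulls back to agree with its neighbors
data_cost = np.ones((4, 4, 3))
data_cost[:, :, 1] = 0.0
data_cost[1, 1] = [1.0, 0.6, 0.0]
labels = icm(data_cost, lam=1.0)
```

Graph Cuts and Belief Propagation optimize the same kind of energy globally (or over larger neighborhoods), which is where their accuracy advantage and higher cost both come from.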
A simple solution for the non perspective three point pose problem
Pub Date : 2014-12-01 DOI: 10.1109/IC3D.2014.7032594
Mohamed H. Merzban, M. Abdellatif, A. Abouelsoud, ahmed. ali
The Non-Perspective Three Point Pose (NP3P) problem is a generalization of the classical three point pose problem to multi-camera systems that have no common projection center. In this paper, we develop a simple, minimal algebraic solution to the NP3P problem in which the projection rays of the three points may have arbitrary but known directions. This problem is known to have a maximum of eight solutions. The problem is formulated mathematically as the solution of three multivariate polynomials. The Sylvester matrix resultant of two equations is used to obtain an eighth-order polynomial that can be solved to yield the pose parameters. The accuracy and computational cost of the new method are compared to other methods reported in the literature; it was found to have comparable accuracy at a lower computational cost.
Citations: 5
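The elimination step the abstract mentions can be illustrated with the Sylvester resultant of two univariate polynomials: the determinant of their Sylvester matrix vanishes exactly when the polynomials share a root, which is how one variable is eliminated from a polynomial system. The polynomials below are toy examples, not the paper's NP3P equations.

```python
import numpy as np

def sylvester(p, q):
    """Sylvester matrix of polynomials p and q, given as coefficient lists
    with the highest degree first. Its determinant is the resultant."""
    m, n = len(p) - 1, len(q) - 1          # degrees of p and q
    S = np.zeros((m + n, m + n))
    for i in range(n):                     # n shifted copies of p's coefficients
        S[i, i:i + m + 1] = p
    for i in range(m):                     # m shifted copies of q's coefficients
        S[n + i, i:i + n + 1] = q
    return S

# p = (x - 1)(x - 2) and q = (x - 1)(x + 3) share the root x = 1,
# so their resultant is zero
p = [1.0, -3.0, 2.0]
q = [1.0, 2.0, -3.0]
r = np.linalg.det(sylvester(p, q))
```

In the NP3P setting the same construction is applied with coefficients that are themselves polynomials in the remaining unknown, which is what produces the single eighth-order polynomial.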
Floating display screen formed by AIRR (Aerial imaging by retro-reflection) for interaction in 3D space
Pub Date : 2014-12-01 DOI: 10.1109/IC3D.2014.7032590
Hirotsugu Yamamoto, M. Yasui, M. S. Alvissalim, Masashi Takahashi, Yuka Tomiyama, S. Suyama, M. Ishikawa
This paper presents an interaction system with a floating display screen. Aerial imaging by retro-reflection (AIRR) forms an aerial LED screen that floats over a tabletop and is visible over a viewing angle of well over 120 degrees. In order to reduce latency, our system employs a high-frame-rate LED display and high-speed stereoscopic cameras. The developed system enables users to interact with aerially displayed information spontaneously.
Citations: 10
Real-time tracking with an embedded 3D camera with FPGA processing
Pub Date : 2014-12-01 DOI: 10.1109/IC3D.2014.7032593
A. Muscoloni, S. Mattoccia
People tracking is a crucial component of many intelligent video surveillance systems, and recent developments in embedded computing architectures and algorithms allow us to design compact, lightweight and energy-efficient systems aimed at tackling this problem. In particular, the advent of cheap RGBD sensing devices makes it possible to exploit depth information as an additional cue. In this paper we propose a 3D tracking system intended to become the basic node of a distributed system for business analytics applications. In the envisioned distributed system, each node would consist of a custom stereo camera with on-board FPGA processing coupled with a compact CPU-based board. In the basic node proposed in this paper, aimed at raw people tracking within the sensed area of a single device, the custom stereo camera delivers accurate dense depth maps in real time and with minimal energy requirements, using state-of-the-art computer vision algorithms. The CPU-based system then processes this information to enable reliable 3D people tracking. With the FPGA front-end deployed, the main constraint for real-time 3D tracking is the computing requirement of the CPU-based board, and in this paper we propose a fast and effective 3D people tracking algorithm suited for implementation on embedded devices.
Citations: 12
Visibility-driven patch group generation
Pub Date : 2014-12-01 DOI: 10.1109/IC3D.2014.7032597
S. Ebel, W. Waizenegger, M. Reinhardt, O. Schreer, I. Feldmann
The target application of this paper is 3D scene reconstruction for future real-time production scenarios in the broadcast domain, as well as future post-production and on-set visual effect previews in digital cinema. Our approach is based on multiple trifocal camera capture systems which can be arbitrarily distributed on set. In this work we tackle the problem of multi-view data fusion from a real-time perspective. The novelty of our work is that, instead of performing pixel-wise processing, we consider patch groups as higher-level scene representations. Based on the robust results of the trifocal sub-systems, we implicitly obtain an optimized set of patch groups, even for partly occluded regions, by applying a simple geometric rule set. Furthermore, we show that a simplified meshing can be applied to the patch group borders, which enables a GPU-centric real-time implementation. The presented algorithm is tested on real-world test shoot data for the case of 3D reconstruction of humans.
Citations: 11
Detection of 3D position of eyes through a consumer RGB-D camera for stereoscopic mixed reality environments
Pub Date : 2014-12-01 DOI: 10.1109/IC3D.2014.7032592
Manuela Chessa, Matteo Garibotti, Guido Maiello, Lorenzo Caroggio, Huayi Huang, S. Sabatini, F. Solari
A novel approach to track the 3D position of the user's eyes in stereoscopic virtual environments, where stereo glasses are worn, is proposed. The approach improves a state-of-the-art real-time face tracking algorithm by addressing the occlusion due to the stereo glasses and providing an estimate of eye position based on biometric features. More generally, our solution can be seen as a proof of concept for a more robust approach to improving motion tracking techniques. In particular, the proposed technique yields accurate and stable estimates of the 3D position of the user's eyes while the user moves in front of the stereoscopic display. Correct tracking of both eyes' 3D position is a crucial step towards a more natural human-computer interaction that diminishes visual fatigue. The proposed approach is validated through quantitative tests: (i) we assessed the accuracy of our algorithm for tracking the 3D position of users' eyes with and without stereo glasses; (ii) we performed a perceptual assessment of the natural interaction in the virtual environments through experimental sessions with several users.
Citations: 1
Validation of subpixel area based simulation for autostereoscopic displays with parallax barriers
Pub Date : 2014-12-01 DOI: 10.1109/IC3D.2014.7032574
R. Bartmann, Mathias Kuhlmey, Ronny Netzbandt, R. Barré
Ideal autostereoscopic display designs show a symmetrical light intensity distribution in the viewer's space. Real displays, however, always show some inhomogeneities. We investigated and simulated an autostereoscopic display with a motion parallax barrier. Our newly developed subpixel area model (SAM) served as the basis for barrier- and intensity-dependent viewing zone calculations. We introduce the implementation of our SAM approach for simulating the luminance and content distribution at viewing distance. Furthermore, a particular misalignment of the optical image splitter has been simulated and metrologically compared with the effect of a similar error in an assembled autostereoscopic display. In detail, the truncated shape of the luminous subpixel area under the image splitter, the misalignment, and its result have been described mathematically.
Citations: 6
Interaction between size and disparity cues in distance judgements
Pub Date : 2014-12-01 DOI: 10.1109/IC3D.2014.7032587
Paul Hands, A. Khushu, J. Read
The human visual system has the ability to use the size of familiar objects as a cue to the object's depth in the world. With the advancement of Stereoscopic 3D (S3D) displays, objects can now be displayed with differing size and binocular disparity cues to the depth of the object. We tested, for absolute and relative disparity cues, whether the familiar size or disparity cue was the preferred indication of depth. We found that, when only absolute disparity cues are available, the retinal size of a familiar object has a significant effect on its perceived depth, but with relative disparity the binocular disparity was a strong enough cue to depth that size was not a significant cue in determining the depth of the familiar object.
Citations: 3