Stereoscopic vehicle speed measurement - System calibration and synchronization errors analysis
Vojislav Lukic, S. Vujic, A. Makarov, Zdravko Popovic
2011 International Conference on 3D Imaging (IC3D) · Pub Date: 2011-12-01 · DOI: 10.1109/IC3D.2011.6584390
This paper presents a system and an algorithm for estimating the translational speed of moving objects. The speed estimate is based on stereoscopic analysis of the displacement of a region of interest on a moving object. The system is applied to vehicle speed measurement. A legacy automatic license plate recognition (ALPR) algorithm is used for license plate segmentation. The license plate serves as the region of interest (RoI) and is tracked throughout a video stream. The RoI displacement and the elapsed time are measured to estimate the vehicle's average speed. The system calibration is described in detail, as well as its simplifying effect on the displacement measurement. An analysis of possible errors and artifacts is given and illustrated with experimental results.
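The core geometry behind such a stereoscopic speed estimate can be sketched as follows. This is an illustrative outline, not the authors' implementation: it assumes a calibrated, rectified stereo pair with known focal length `f`, baseline `b`, and principal point `(cx, cy)` (all values below are made up), triangulates the RoI centre at two instants, and divides the 3D displacement by the elapsed time.

```python
# Hedged sketch (not the paper's code): average speed of a tracked RoI from a
# calibrated, rectified stereo rig. All calibration values are illustrative.

def triangulate(xl, yl, xr, f, b, cx, cy):
    """Back-project a matched RoI centre from rectified left/right image columns."""
    disparity = xl - xr
    if disparity <= 0:
        raise ValueError("non-positive disparity")
    z = f * b / disparity          # depth along the optical axis (metres)
    x = (xl - cx) * z / f
    y = (yl - cy) * z / f
    return (x, y, z)

def average_speed(p0, p1, dt):
    """Average translational speed between two 3D RoI positions, in m/s."""
    dx, dy, dz = (a - b for a, b in zip(p1, p0))
    return (dx**2 + dy**2 + dz**2) ** 0.5 / dt

# Example: license-plate centre observed in two stereo frames 0.5 s apart
f, b, cx, cy = 1200.0, 0.30, 640.0, 360.0   # assumed calibration values
p0 = triangulate(700.0, 400.0, 664.0, f, b, cx, cy)
p1 = triangulate(690.0, 398.0, 660.0, f, b, cx, cy)
v = average_speed(p0, p1, 0.5)
```

The paper's synchronization-error analysis matters precisely because `dt` and the two disparities must refer to the same instants; a timing offset between the cameras biases the triangulated positions and hence the speed.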
Efficient and robust reduction of motion artifacts for 3D Time-of-Flight cameras
Mirko Schmidt, B. Jähne
2011 International Conference on 3D Imaging (IC3D) · Pub Date: 2011-12-01 · DOI: 10.1109/IC3D.2011.6584391
3D Time-of-Flight (ToF) cameras acquire dense depth maps of a scene by determining the time it takes light to travel from a source to an object and back to the camera. Computing the depth requires multiple measurements, and current ToF systems cannot acquire all of them simultaneously. If the observed scene changes while the data for a single depth map is being acquired, the reconstructed values are erroneous. Such errors are known as motion artifacts. This work investigates the causes of motion artifacts and proposes a method that significantly reduces them. The method analyzes the temporal raw-data signal of individual pixels, which makes it possible to identify and correct the affected raw values. It is demonstrated on a commercial ToF system. The proposed algorithms can be implemented very efficiently and can therefore run in real time, even on systems with limited computational resources (e.g., embedded systems).
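To make the sequential-measurement problem concrete, the following sketch shows the standard four-phase depth computation for continuous-wave ToF and one simple per-pixel consistency test. This is a generic illustration, not the paper's algorithm: for a static pixel the two opposite-phase sample pairs share the same offset (A0 + A2 ≈ A1 + A3), so a large mismatch is a cue that the scene moved between raw acquisitions.

```python
# Hedged sketch, not the paper's method: 4-phase ToF depth and a simple
# raw-data plausibility check for motion between the four samples.
import math

C = 299_792_458.0  # speed of light, m/s

def tof_depth(a0, a1, a2, a3, f_mod):
    """Depth from four phase-shifted correlation samples at modulation frequency f_mod."""
    phase = math.atan2(a3 - a1, a0 - a2) % (2 * math.pi)
    return C * phase / (4 * math.pi * f_mod)

def motion_corrupted(a0, a1, a2, a3, rel_tol=0.05):
    """Flag pixels whose opposite-phase sums disagree - a motion-artifact cue."""
    s02, s13 = a0 + a2, a1 + a3
    return abs(s02 - s13) > rel_tol * max(s02, s13, 1e-9)

# Consistent pixel: offset 100, amplitude 50, phase pi/2
samples = (100.0, 50.0, 100.0, 150.0)
d = tof_depth(*samples, f_mod=20e6)        # roughly 1.87 m at 20 MHz
ok = not motion_corrupted(*samples)        # True: offsets agree
bad = motion_corrupted(100.0, 50.0, 100.0, 180.0)  # True: offsets disagree
```

A flagged raw value could then be repaired, e.g. by substituting the corresponding sample from a neighbouring frame, which is why such checks remain cheap enough for embedded real-time use.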
WEAR++: 3D model driven camera tracking on board the International Space Station
David Tingdahl, David De Weerdt, M. Vergauwen, L. Gool
2011 International Conference on 3D Imaging (IC3D) · Pub Date: 2011-12-01 · DOI: 10.1109/IC3D.2011.6584394
We present WEAR++, a wearable augmented reality system consisting of a head-mounted display, a camera and an inertial measurement unit. This paper focuses on the visual camera tracking system developed for WEAR++. Using a 3D model of the scene, we first create a map of 3D-2D correspondences in an offline mapping procedure. During online operation, we match features from a new image to the database and track the camera pose with an Extended Kalman Filter using the recovered 3D-2D correspondences. By using robust local features (SURF) and a frustum-culling algorithm, we demonstrate that we are able to track the pose even for jerky motions and blurry images. Furthermore, we explain how the system was utilised by astronaut Frank De Winne on board the International Space Station to perform maintenance tasks.
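The frustum-culling step named in the abstract can be sketched as a simple visibility filter: before matching image features against the database, discard map points that cannot project into the current view. This is an illustrative outline under a pinhole-camera assumption, not the WEAR++ code; the intrinsics and points below are made up.

```python
# Hedged sketch of frustum culling (illustrative, not the WEAR++ implementation):
# keep only 3D map points that lie in front of the camera and project inside
# the image, so feature matching considers only potentially visible points.

def in_frustum(point_cam, f, cx, cy, width, height, near=0.05, far=50.0):
    """point_cam: 3D point in camera coordinates (x right, y down, z forward)."""
    x, y, z = point_cam
    if not (near < z < far):
        return False               # behind the camera or outside the depth range
    u = f * x / z + cx             # pinhole projection to pixel coordinates
    v = f * y / z + cy
    return 0 <= u < width and 0 <= v < height

pts = [(0.0, 0.0, 2.0), (5.0, 0.0, 2.0), (0.0, 0.0, -1.0)]
visible = [p for p in pts if in_frustum(p, f=800.0, cx=320.0, cy=240.0,
                                        width=640, height=480)]
```

Culling against the pose predicted by the Extended Kalman Filter keeps the candidate set small, which is part of what makes tracking robust to jerky motion on limited hardware.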