
2014 Canadian Conference on Computer and Robot Vision: Latest Publications

Building Better Formlet Codes for Planar Shape
Pub Date: 2014-05-06, DOI: 10.1109/CRV.2014.19
A. Yakubovich, J. Elder
The GRID/formlet representation of planar shape has a number of nice properties [4], [10], [3], but there are also limitations: it is slow to converge for shapes with elongated parts, it can be sensitive to parameterization, and it can be grossly ill-conditioned. Here we describe a number of innovations on the GRID/formlet model that address these problems: 1) By generalizing the formlet basis to include oriented deformations we achieve faster convergence for elongated parts. 2) By introducing a modest regularizing term that penalizes the total energy of each deformation we limit redundancy in formlet parameters and improve identifiability of the model. 3) By applying a recent contour remapping method [9] we eliminate problems due to drift of the model parameterization during matching pursuit. These innovations are shown both to speed convergence and to improve performance on a shape completion task.
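The regularized greedy selection in point (2) can be illustrated with a toy 1-D analogue of matching pursuit. This is purely a sketch: the function name, the atom dictionary, and the `lam * coeff**2` energy penalty are assumptions for illustration, not the paper's actual formlet machinery.

```python
def matching_pursuit(signal, atoms, n_iter=3, lam=0.01):
    """Greedy pursuit over a fixed dictionary with an energy regularizer.

    At each step, pick the atom whose projection best reduces the residual
    energy, minus a penalty (lam * coeff^2) on the deformation's energy,
    which discourages redundant large-coefficient atoms.
    """
    residual = list(signal)
    chosen = []
    for _ in range(n_iter):
        best = None
        for i, atom in enumerate(atoms):
            norm2 = sum(a * a for a in atom)
            if norm2 == 0:
                continue
            coeff = sum(r * a for r, a in zip(residual, atom)) / norm2
            # regularized score: residual-energy reduction minus penalty
            gain = coeff * coeff * norm2 - lam * coeff * coeff
            if best is None or gain > best[0]:
                best = (gain, i, coeff)
        _, i, coeff = best
        chosen.append((i, coeff))
        residual = [r - coeff * a for r, a in zip(residual, atoms[i])]
    return chosen, residual
```

With two orthogonal atoms, the pursuit picks the stronger component first and drives the residual to zero in two steps.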
Citations: 2
Adaptive Robotic Contour Following from Low Accuracy RGB-D Surface Profiling and Visual Servoing
Pub Date: 2014-05-06, DOI: 10.1109/CRV.2014.15
D. Nakhaeinia, P. Payeur, R. Laganière
This paper introduces an adaptive contour following method for robot manipulators that combines low-accuracy RGB-D sensing with eye-in-hand visual servoing. The main objective is to allow for the detection and following of freely shaped 3D object contours under visual guidance that is initially provided by a fixed Kinect sensor and refined by a single eye-in-hand camera. A path planning algorithm is developed that constrains the end effector to maintain close proximity to the surface of the object while following its contour. To achieve this goal, RGB-D sensing is used to rapidly acquire information about the 3D location and profile of an object. However, because of the low resolution and noisy information provided by such sensors, accurate contour following is achieved with an extra eye-in-hand camera that is mounted on the robot's end-effector to locally refine the contour definition and to plan an accurate trajectory for the robot. Experiments carried out with a 7-DOF manipulator and the dual sensory stage are reported to validate the reliability of the proposed contour following method.
Citations: 7
Vision-Based Qualitative Path-Following Control of Quadrotor Aerial Vehicle with Speeded-Up Robust Features
Pub Date: 2014-05-06, DOI: 10.1109/CRV.2014.50
Trung Nguyen, G. Mann, R. Gosine
This paper describes a vision-based 3D navigation technique for path-following control of a quadrotor aerial Visual-Teach-and-Repeat system. The navigation method is built on Funnel Lane theory, which defines the possible positions from which to fly straight. The navigation calculation uses the reference images and features to compute the desired heading angle and height during path following. Speeded-Up Robust Features (SURF) are used as image features, and features are tracked between images by matching SURF descriptors. The quadrotor is able to perform path following independently in an indoor environment without the support of an external tracking system. Simulation is conducted on the Robot Operating System (ROS) and the Gazebo simulator. The intended applications of the proposed method are visual homing and visual servoing in GPS-denied environments.
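Descriptor matching of the kind used with SURF can be sketched as brute-force nearest-neighbour search with a ratio test. The sketch below is a plain-Python stand-in with toy descriptors; a real system would use an optimized SURF implementation (e.g. from opencv-contrib), and the 0.7 ratio is a conventional choice, not a value from the paper.

```python
import math

def euclid(a, b):
    """Euclidean distance between two descriptor vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def ratio_match(desc_a, desc_b, ratio=0.7):
    """Match each descriptor in desc_a to its nearest neighbour in desc_b,
    keeping only matches that pass Lowe's ratio test (nearest distance must
    be well below the second-nearest, i.e. the match is unambiguous)."""
    matches = []
    for i, da in enumerate(desc_a):
        dists = sorted((euclid(da, db), j) for j, db in enumerate(desc_b))
        if len(dists) >= 2 and dists[0][0] < ratio * dists[1][0]:
            matches.append((i, dists[0][1]))
    return matches
```

Ambiguous descriptors (two near-equal candidates) are silently dropped, which is what makes the ratio test useful for tracking between the teach and repeat passes.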
Citations: 5
A Meta-Technique for Increasing Density of Local Stereo Methods through Iterative Interpolation and Warping
Pub Date: 2014-05-06, DOI: 10.1109/CRV.2014.59
A. Murarka, Nils Einecke
Despite much progress in global methods for computing depth from pairs of stereo images, local block matching methods are still immensely popular, largely due to their low computational cost and ease of implementation. However, such methods usually fail to produce valid depths in several image regions for various reasons, such as violations of the fronto-parallel assumption and lack of texture. In this paper, we present a simple and fast meta-technique for increasing the percentage of valid depths (depth map density) for local methods while keeping the percentage of pixels with erroneous depths low. In the method, the original disparity map computed by a local stereo method is iteratively improved through a process of depth interpolation and image warping based on the interpolated depth. Image warping provides a mechanism for testing the validity of the interpolated depths, allowing incorrect depths to be discarded.
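One interpolate-then-verify pass of such a scheme can be sketched on a single scanline. The helper below is a hypothetical 1-D analogue, not the authors' code: real disparity maps are 2-D, and a real warp test would compare image blocks under a matching cost, not single pixel intensities.

```python
def densify_row(disp, left, right, max_err=5):
    """Fill invalid disparities on one scanline, keeping only verified fills.

    disp: disparities from a local stereo method, None where matching failed.
    left/right: pixel intensities; disparity d maps left[x] onto right[x - d].
    A gap is linearly interpolated from its nearest valid neighbours, then
    kept only if warping with it lands on a photo-consistent right pixel.
    """
    out = list(disp)
    valid = [x for x, d in enumerate(disp) if d is not None]
    for x, d in enumerate(disp):
        if d is not None:
            continue
        lo = max((v for v in valid if v < x), default=None)
        hi = min((v for v in valid if v > x), default=None)
        if lo is None or hi is None:
            continue  # no bracketing support; leave the gap
        cand = disp[lo] + (disp[hi] - disp[lo]) * (x - lo) / (hi - lo)
        xr = x - int(round(cand))
        # warp test: accept the interpolated depth only if intensities agree
        if 0 <= xr < len(right) and abs(left[x] - right[xr]) <= max_err:
            out[x] = cand
    return out
```

Iterating this pass lets each round's accepted fills serve as support for the next round's interpolation, which is the sense in which density grows per iteration.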
Our results on the KITTI stereo data set demonstrate that, on average, we can increase density by 7-13% after a single iteration, for a 15-29% increase in computation and only a slight change in the outlier percentage, depending on the cost function used for matching.
Citations: 2
3D Reconstruction by Fusioning Shadow and Silhouette Information
Pub Date: 2014-05-06, DOI: 10.1109/CRV.2014.58
Rafik Gouiaa, J. Meunier
In this paper, we propose a new 3D reconstruction method that mainly uses the shadow and silhouette information of a moving object or person. The method is derived from the well-known Shape From Silhouettes (SFS) approach. A light source can be seen as a camera for which the generated image is the object's shadow, playing the role of a silhouette. Based on this, we propose to replace the multi-camera system of SFS with multiple infrared light sources while keeping the same Visual Hull (VH) reconstruction procedure. Our system therefore consists of infrared light sources and a single infrared camera. In this case, in addition to the object silhouette given by the camera, each light source generates an object shadow that reveals the object. Thus, as in SFS, the VH of a given object is reconstructed by intersecting the visual cones. Our method has many advantages compared to SFS, and preliminary results on synthetic and real scene images showed that the system could be applied in several contexts.
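The cone-intersection step can be illustrated with a minimal 2-D analogue: each sensor (the camera silhouette or a light-source shadow) constrains the object along one direction, and the hull is the intersection of the back-projected constraints. This sketch uses axis-aligned orthographic projections for simplicity; the paper works with full 3-D cones from calibrated camera and light positions.

```python
def carve(mask_x, mask_y):
    """2-D visual-hull analogue by intersecting two orthogonal projections.

    mask_x[i] == 1 if the shadow cast along rows covers row i;
    mask_y[j] == 1 if the camera silhouette covers column j.
    A grid cell survives only if it lies inside BOTH back-projections,
    mirroring the intersection of visual cones in SFS.
    """
    return [[1 if mask_x[i] and mask_y[j] else 0
             for j in range(len(mask_y))]
            for i in range(len(mask_x))]
```

Each additional light source would add another mask and carve the hull further, which is why replacing cameras with light sources leaves the VH procedure itself unchanged.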
Citations: 10
Camera Matrix Calibration Using Circular Control Points and Separate Correction of the Geometric Distortion Field
Pub Date: 2014-05-06, DOI: 10.1109/CRV.2014.34
Victoria Rudakova, P. Monasse
We achieve precise camera calibration with circular control points by, first, separating the lens distortion parameters from the other camera parameters and computing the distortion field in advance with a calibration harp. Second, to compensate for the perspective bias that tends to occur with circular patterns, we incorporate the conic affine transformation into the error being minimized when estimating the homography, leaving all other calibration steps as they appear in the literature. Such an error function compensates for the perspective bias. Combined with precise keypoint detection, the approach is shown to be more stable than the current state-of-the-art global calibration method.
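Correcting distortion separately presupposes a distortion model that can be applied and inverted independently of the camera matrix. The sketch below uses the standard two-coefficient radial polynomial with a fixed-point inversion; this is a generic textbook model chosen for illustration, not necessarily the distortion field that the calibration harp estimates.

```python
def distort(x, y, k1, k2, cx=0.0, cy=0.0):
    """Apply the radial model x_d = x_u * (1 + k1*r^2 + k2*r^4)
    about the distortion center (cx, cy)."""
    xu, yu = x - cx, y - cy
    r2 = xu * xu + yu * yu
    f = 1 + k1 * r2 + k2 * r2 * r2
    return xu * f + cx, yu * f + cy

def undistort(x, y, k1, k2, cx=0.0, cy=0.0, n_iter=10):
    """Invert the radial model by fixed-point iteration: repeatedly divide
    the distorted coordinates by the factor evaluated at the current
    undistorted estimate. Converges quickly for mild distortion."""
    xd, yd = x - cx, y - cy
    xu, yu = xd, yd
    for _ in range(n_iter):
        r2 = xu * xu + yu * yu
        f = 1 + k1 * r2 + k2 * r2 * r2
        xu, yu = xd / f, yd / f
    return xu + cx, yu + cy
```

Because the distortion field is determined first, the subsequent homography estimation can operate on undistorted coordinates as if the camera were ideal.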
Citations: 11
Photon Detection and Color Perception at Low Light Levels
Pub Date: 2014-05-06, DOI: 10.1109/CRV.2014.45
Mehdi Rezagholizadeh, James J. Clark
Working under low light conditions is of particular interest in machine vision applications such as night vision, tone-mapping techniques, low-light imaging, photography, and surveillance cameras. This work investigates the perception of color in low light situations as constrained by the physical principles governing photon emission. The impact of the probabilistic nature of photon emission on our color perception becomes more significant at low light levels. In this regard, physical principles are leveraged to develop a framework that accounts for the effects of low light levels on color vision. The results of this study show that the normalized spectral power distribution of light changes with light intensity and becomes more uncertain in low light situations, as a result of which the uncertainty of color perception increases. Furthermore, a color patch at low light levels gives rise to uncertain color measurements whose chromaticities form an elliptic shape inside the chromaticity diagram, centered around the patch's high-intensity chromaticity. The size of these ellipses is a function of the light intensity and the chromaticity of the color patches; however, the orientation of the ellipses depends only on the patch chromaticity and not on the light level. Moreover, the results of this work indicate that the spectral composition of light is a determining factor in the size and orientation of the ellipses. The elliptic shape of the measured samples is a result of the Poisson distribution governing photon emission together with the form of the human cone spectral sensitivity functions, and can partly explain the elliptic shape of the MacAdam ellipses.
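The growth of chromaticity uncertainty at low photon counts can be reproduced with a small Monte-Carlo simulation. The setup is an illustrative simplification of the paper's framework: three independent Poisson photon channels and a single chromaticity coordinate r = R/(R+G+B).

```python
import math
import random

def poisson(lam, rng):
    """Draw a Poisson sample via Knuth's multiplication method
    (adequate for the moderate photon counts used here)."""
    L, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= rng.random()
        if p <= L:
            return k
        k += 1

def chromaticity_spread(mean_counts, trials=300, seed=1):
    """Standard deviation of r = R/(R+G+B) under Poisson photon noise."""
    rng = random.Random(seed)
    rs = []
    for _ in range(trials):
        r, g, b = (poisson(m, rng) for m in mean_counts)
        total = r + g + b
        if total:
            rs.append(r / total)
    mu = sum(rs) / len(rs)
    return math.sqrt(sum((v - mu) ** 2 for v in rs) / len(rs))
```

Since the Poisson relative fluctuation scales as 1/sqrt(N), the simulated chromaticity spread shrinks as the mean photon count grows, mirroring the intensity-dependent ellipse sizes reported above.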
Citations: 3
Automated Door Detection with a 3D-Sensor
Pub Date: 2014-05-06, DOI: 10.1109/CRV.2014.44
Sebastian Meyer zu Borgsen, Matthias Schöpfer, Leon Ziegler, S. Wachsmuth
Service robots share the living space of humans. Thus, they should have a similar concept of the environment without having everything labeled beforehand. The detection of closed doors is challenging because doors come in different materials and designs and can even include glass inlays. At the same time, their detection is vital for any kind of navigation task in domestic environments. A typical 2D object recognition algorithm may not be able to handle the large optical variety of doors. Improvements in low-cost infrared 3D sensors enable robots to perceive their environment as a spatial structure. We therefore propose a novel door detection algorithm that employs basic structural knowledge about doors and extracts candidate door parts from point clouds using constrained region growing. These parts are weighted with Gaussian probabilities and combined into an overall probability measure. To show the validity of our approach, a realistic dataset of different doors from different angles and distances was acquired.
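The weighting-and-combination step can be sketched as follows. The specific part features (e.g. a part's width or height against a typical door dimension) and the product combination rule are hypothetical placeholders standing in for the paper's actual probability measure.

```python
import math

def gaussian_weight(value, mu, sigma):
    """Unnormalized Gaussian score: 1.0 when the measured value matches the
    expectation exactly, decaying smoothly with the deviation."""
    return math.exp(-((value - mu) ** 2) / (2 * sigma ** 2))

def door_probability(parts):
    """Combine per-part evidence into one score.

    parts: list of (measured, expected, sigma) triples, one per extracted
    door part (hypothetical features such as width or height in meters).
    """
    p = 1.0
    for value, mu, sigma in parts:
        p *= gaussian_weight(value, mu, sigma)
    return p
```

A candidate whose parts all match structural expectations scores near 1.0, while a single badly deviating part drives the combined score toward zero.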
Citations: 18
Visual Saliency Improves Autonomous Visual Search
Pub Date: 2014-05-06, DOI: 10.1109/CRV.2014.23
Amir Rasouli, John K. Tsotsos
Visual search for a specific object in an unknown environment by autonomous robots is a complex task. The key challenge is to locate the object of interest while minimizing the cost of search in terms of time or energy consumption. Given the impracticality of examining all possible views of the search environment, recent studies suggest the use of attentive processes to optimize visual search. In this paper, we describe a method of visual search that exploits the use of attention in the form of a saliency map. This map is used to update the probability distribution of which areas to examine next, increasing the utility of spatial volumes where objects consistent with the target's visual saliency are observed. We present experimental results on a mobile robot and conclude that our method improves the process of visual search in terms of reducing the time and number of actions to be performed to complete the process.
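The saliency-driven update of the where-to-look-next distribution can be sketched as a simple Bayesian-style reweighting. This is an illustrative reduction; the paper's actual utility computation over spatial volumes is more involved.

```python
def update_search_map(prior, saliency):
    """Re-weight the probability of where to examine next by saliency.

    prior: current probability per region that the target is there.
    saliency: saliency score per region from the saliency map.
    Regions whose appearance is consistent with the target's saliency
    gain probability mass; the result is renormalized to sum to 1.
    """
    weighted = [p * s for p, s in zip(prior, saliency)]
    total = sum(weighted)
    return [w / total for w in weighted]
```

Starting from a uniform prior, a region five times as salient as the rest ends up dominating the distribution, so the robot examines it first and shortens the search.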
Citations: 12
Trinocular Spherical Stereo Vision for Indoor Surveillance
Pub Date: 2014-05-06, DOI: 10.1109/CRV.2014.56
M. Findeisen, G. Hirtz
Stereo vision based sensors are widely used for indoor surveillance applications. Besides the demand for increasing performance, the reduction of the overall number of sensors is a crucial issue. The central goal is to reduce the complexity and overall cost of the system. One option is to use wide-angle or even omnidirectional stereo vision sensors. We present a powerful approach that uses three omnidirectional cameras to compute full hemispherical depth information. By employing this, we can cover a complete room with only one sensor.
Citations: 6