Fourth Canadian Conference on Computer and Robot Vision (CRV '07): Latest Publications

Corridor Navigation and Obstacle Avoidance using Visual Potential for Mobile Robot
Pub Date : 2007-05-28 DOI: 10.1109/CRV.2007.21
N. Ohnishi, A. Imiya
In this paper, we develop an algorithm for corridor navigation and obstacle avoidance using visual potential for visual navigation by an autonomous mobile robot. The robot is equipped with a camera system which dynamically captures the environment. The visual potential is computed from an image sequence and optical flow computed from successive images captured by the camera mounted on the robot. Our robot selects a local pathway using the visual potential observed through its vision system. Our algorithm enables mobile robots to avoid obstacles without any knowledge of a robot workspace. We demonstrate experimental results using image sequences observed with a moving camera in a simulated environment and a real environment. Our algorithm is robust against the fluctuation of displacement caused by mechanical error of the mobile robot, and the fluctuation of planar-region detection caused by a numerical error in the computation of optical flow.
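The abstract does not give the exact form of the visual potential, so the following is only a generic potential-field steering sketch in Python: an obstacle-likelihood map (which in the paper would come from optical flow and planar-region detection) acts as a repulsive term, the corridor's far end as an attractive goal, and the robot steers down the negative gradient. The map, gains and robot position are invented for illustration.

```python
# Minimal potential-field steering sketch (not the authors' exact "visual
# potential"): obstacle likelihood is a repulsive field, the corridor end an
# attractive goal, and the heading follows the negative gradient.
import numpy as np

def steering_direction(obstacle_map, goal_xy, k_rep=50.0, k_att=1.0):
    """Return a unit steering vector from a 2D obstacle-likelihood map (0..1)."""
    h, w = obstacle_map.shape
    ys, xs = np.mgrid[0:h, 0:w]

    # Attractive potential: squared distance to the goal (corridor end).
    att = 0.5 * k_att * ((xs - goal_xy[0]) ** 2 + (ys - goal_xy[1]) ** 2)

    # Repulsive potential: scaled obstacle likelihood.
    rep = k_rep * obstacle_map

    potential = att + rep
    gy, gx = np.gradient(potential)

    # Evaluate the gradient at the robot's position (bottom-centre of the image).
    ry, rx = h - 1, w // 2
    direction = -np.array([gx[ry, rx], gy[ry, rx]])
    norm = np.linalg.norm(direction)
    return direction / norm if norm > 1e-9 else direction

if __name__ == "__main__":
    obstacles = np.zeros((120, 160))
    obstacles[40:80, 90:130] = 1.0          # a hypothetical obstacle blob
    print(steering_direction(obstacles, goal_xy=(80, 0)))
```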
Citations: 13
Dense Stereo Range Sensing with Marching Pseudo-Random Patterns
Pub Date : 2007-05-28 DOI: 10.1109/CRV.2007.22
D. Desjardins, P. Payeur
As an extension to classical structured lighting techniques, the use of bi-dimensional pseudo-random color codes is explored to perform range sensing with variable density from a stereo calibrated rig and a projector. Pseudo-random codes are used to create artificial textures on a scene which are extracted and grouped in a confidence map to ensure reliable feature matching between pairs of images taken from two cameras. Depth estimation is performed on corresponding points with progressive refinement as the pseudo-random pattern projection is marched over the scene to increase the density of matched features, and achieve dense 3D reconstruction. The potential of bi-dimensional pseudo-random color patterns for structured lighting is demonstrated in terms of patterns computation, ease of extraction, matching confidence level, as well as density of depth estimation for 3D reconstruction.
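A defining property of such bi-dimensional pseudo-random codes is that every small window of the projected pattern is unique, so an observed window identifies its position in the pattern. The sketch below generates such a pattern by brute-force rejection sampling; the colour count, window size and grid size are illustrative, not the values used in the paper.

```python
# Generate a pseudo-random colour grid in which every k x k window is unique.
import numpy as np

def make_pseudo_random_pattern(rows=20, cols=20, colours=4, k=3, seed=0, tries=2000):
    rng = np.random.default_rng(seed)
    for _ in range(tries):
        pattern = rng.integers(0, colours, size=(rows, cols))
        seen = set()
        ok = True
        for r in range(rows - k + 1):
            for c in range(cols - k + 1):
                key = pattern[r:r + k, c:c + k].tobytes()
                if key in seen:          # window code already used: reject pattern
                    ok = False
                    break
                seen.add(key)
            if not ok:
                break
        if ok:
            return pattern
    raise RuntimeError("no pattern with unique windows found; relax parameters")

if __name__ == "__main__":
    p = make_pseudo_random_pattern()
    print(p.shape, "unique 3x3 windows:", (p.shape[0] - 2) * (p.shape[1] - 2))
```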
Citations: 36
Computer Assisted Detection of Polycystic Ovary Morphology in Ultrasound Images
Pub Date : 2007-05-28 DOI: 10.1109/CRV.2007.18
Maryruth J. Lawrence, M. Eramian, R. Pierson, E. Neufeld
Polycystic ovary syndrome (PCOS) is an endocrine abnormality with multiple diagnostic criteria due to its heterogenic manifestations. One of the diagnostic criteria includes analysis of ultrasound images of ovaries for the detection of number, size, and distribution of follicles within the ovary. This involves manual tracing and counting of follicles on the ultrasound images to determine the presence of a polycystic ovary (PCO). We describe a novel method that automates PCO detection. Our algorithm involves segmentation of follicles from ultrasound images, quantifying the attributes of the automatically segmented follicles using stereology, storing follicle attributes as feature vectors, and finally classification of the feature vector into two categories. The classification categories are: PCO present and PCO absent. An automatic PCO diagnostic tool would save considerable time spent on manual tracing of follicles and measuring the length and width of every follicle. Our procedure was able to achieve classification accuracy of 92.86% using a linear discriminant classifier. Our classifier will improve the rapidity and accuracy of PCOS diagnosis, reducing the risk of the severe complications that can arise from delayed diagnosis.
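As a rough illustration of the final classification step, the sketch below feeds hypothetical follicle feature vectors to scikit-learn's linear discriminant classifier to produce the PCO present / PCO absent decision; the feature names and values are invented, not the paper's measured attributes.

```python
# Toy linear discriminant classification of invented follicle feature vectors.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

# Rows: one ovary each; columns: [follicle_count, mean_follicle_diameter_mm]
X = np.array([
    [22, 4.1], [18, 3.8], [25, 4.5],   # PCO present examples
    [6, 7.2],  [8, 6.5],  [5, 8.0],    # PCO absent examples
])
y = np.array([1, 1, 1, 0, 0, 0])       # 1 = PCO present, 0 = PCO absent

clf = LinearDiscriminantAnalysis().fit(X, y)
new_ovary = np.array([[20, 4.0]])
print("PCO present" if clf.predict(new_ovary)[0] == 1 else "PCO absent")
```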
Citations: 38
Figure-ground segmentation using a hierarchical conditional random field
Pub Date : 2007-05-28 DOI: 10.1109/CRV.2007.32
Jordan Reynolds, Kevin P. Murphy
We propose an approach to the problem of detecting and segmenting generic object classes that combines three "off the shelf" components in a novel way. The components are a generic image segmenter that returns a set of "super pixels" at different scales; a generic classifier that can determine if an image region (such as one or more super pixels) contains (part of) the foreground object or not; and a generic belief propagation (BP) procedure for tree-structured graphical models. Our system combines the regions together into a hierarchical, tree-structured conditional random field, applies the classifier to each node (region), and fuses all the information together using belief propagation. Since our classifiers only rely on color and texture, they can handle deformable (non-rigid) objects such as animals, even under severe occlusion and rotation. We demonstrate good results for detecting and segmenting cows, cats and cars on the very challenging Pascal VOC dataset.
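The following toy example shows sum-product belief propagation on a two-level tree of regions (one coarse parent with three child super pixels, labels background/foreground), which is the flavour of inference the paper relies on; the potentials are made up and the real hierarchical CRF is considerably larger.

```python
# Sum-product belief propagation on a tiny two-level region tree.
import numpy as np

PSI = np.array([[0.9, 0.1],     # edge potential psi(child_label, parent_label)
                [0.1, 0.9]])

phi_parent = np.array([0.5, 0.5])            # classifier unsure about coarse region
phi_children = np.array([[0.2, 0.8],         # child super-pixel scores (bg, fg)
                         [0.3, 0.7],
                         [0.6, 0.4]])

# Upward pass: message from each child to the parent.
up = np.array([PSI.T @ phi for phi in phi_children])     # shape (3, 2)

# Parent belief combines its own potential with all upward messages.
belief_parent = phi_parent * up.prod(axis=0)
belief_parent /= belief_parent.sum()

# Downward pass: the message to child i uses all upward messages except its own.
beliefs_children = []
for i, phi in enumerate(phi_children):
    others = np.prod(np.delete(up, i, axis=0), axis=0)
    down = PSI @ (phi_parent * others)
    b = phi * down
    beliefs_children.append(b / b.sum())

print("parent P(fg) =", belief_parent[1])
for i, b in enumerate(beliefs_children):
    print(f"child {i} P(fg) =", b[1])
```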
Citations: 62
Can Lucas-Kanade be used to estimate motion parallax in 3D cluttered scenes?
Pub Date : 2007-05-28 DOI: 10.1109/CRV.2007.15
V. Chapdelaine-Couture, M. Langer
When an observer moves in a 3D static scene, the motion field depends on the depth of the visible objects and on the observer's instantaneous translation and rotation. By computing the difference between nearby motion field vectors, the observer can estimate the direction of local motion parallax and in turn the direction of heading. It has recently been argued that, in 3D cluttered scenes such as a forest, computing local image motion using classical optical flow methods is problematic since these classical methods have problems at depth discontinuities. Hence, estimating local motion parallax from optical flow should be problematic as well. In this paper we evaluate this claim. We use the classical Lucas-Kanade method to estimate optical flow and the Rieger-Lawton method to estimate the direction of motion parallax from the estimated flow. We compare the motion parallax estimates to those of the frequency based method of Mann-Langer. We find that if the Lucas-Kanade estimates are sufficiently pruned, using both an eigenvalue condition and a mean absolute error condition, then the Lucas- Kanade/Rieger-Lawton method can perform as well as or better than the frequency-based method.
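A bare-bones version of the Lucas-Kanade estimate for a single window, including the eigenvalue check used to prune unreliable vectors, might look as follows; the derivative filters and test images are deliberately simple, and the mean-absolute-error pruning stage is omitted.

```python
# Single-window Lucas-Kanade flow with an eigenvalue reliability check.
import numpy as np

def lucas_kanade_window(img0, img1, min_eig=1e-3):
    """Estimate one (vx, vy) for the whole window, or None if ill-conditioned."""
    img0 = img0.astype(float)
    img1 = img1.astype(float)
    Ix = 0.5 * (np.roll(img0, -1, axis=1) - np.roll(img0, 1, axis=1))
    Iy = 0.5 * (np.roll(img0, -1, axis=0) - np.roll(img0, 1, axis=0))
    It = img1 - img0

    # Structure tensor A and right-hand side b of A v = b.
    A = np.array([[np.sum(Ix * Ix), np.sum(Ix * Iy)],
                  [np.sum(Ix * Iy), np.sum(Iy * Iy)]])
    b = -np.array([np.sum(Ix * It), np.sum(Iy * It)])

    # Prune windows whose smallest eigenvalue is too small (aperture problem).
    if np.linalg.eigvalsh(A).min() < min_eig:
        return None
    return np.linalg.solve(A, b)

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    frame0 = rng.random((21, 21))
    frame1 = np.roll(frame0, shift=1, axis=1)   # shift right by one pixel
    print(lucas_kanade_window(frame0, frame1))  # roughly (1, 0) is expected
```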
Citations: 0
Terrain Modelling for Planetary Exploration
Pub Date : 2007-05-28 DOI: 10.1109/CRV.2007.63
Ioannis M. Rekleitis, Jean-Luc Bedwani, S. Gemme, T. Lamarche, E. Dupuis
The success of NASA's Mars Exploration Rovers has demonstrated the important benefits that mobility adds to planetary exploration. Very soon, mission requirements will impose that planetary exploration rovers drive autonomously in unknown terrain. This will require an evolution of the methods and technologies currently used. This paper presents our approach to 3D terrain reconstruction from large sparse range data sets, and the data reduction achieved through decimation. The outdoor experimental results demonstrate the effectiveness of the reconstructed terrain model for different types of terrain. We also present a first attempt to classify the terrain based on the scans properties.
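The paper's decimation operates on triangulated terrain models, so the sketch below is only a stand-in for the data-reduction idea: voxel-grid decimation of a raw range scan, keeping one centroid per occupied voxel. The voxel size and the synthetic scan are arbitrary.

```python
# Voxel-grid decimation of a point cloud: one centroid per occupied voxel.
import numpy as np

def voxel_decimate(points, voxel_size=0.25):
    """points: (N, 3) array; returns one centroid per occupied voxel."""
    keys = np.floor(points / voxel_size).astype(np.int64)
    # Group points by voxel key and average each group.
    _, inverse = np.unique(keys, axis=0, return_inverse=True)
    inverse = np.asarray(inverse).reshape(-1)
    n_voxels = inverse.max() + 1
    sums = np.zeros((n_voxels, 3))
    counts = np.zeros(n_voxels)
    np.add.at(sums, inverse, points)
    np.add.at(counts, inverse, 1)
    return sums / counts[:, None]

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    scan = rng.random((10000, 3)) * np.array([20.0, 20.0, 2.0])  # fake terrain scan
    reduced = voxel_decimate(scan, voxel_size=0.5)
    print(scan.shape[0], "->", reduced.shape[0], "points")
```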
Citations: 15
Local Graph Matching for Object Category Recognition
Pub Date : 2007-05-28 DOI: 10.1109/CRV.2007.44
E. F. Ersi, J. Zelek
A novel model for object category recognition in real-world scenes is proposed. Images in our model are represented by a set of triangular labelled graphs, each containing information on the appearance and geometry of a 3-tuple of distinctive image regions. In the learning stage, our model automatically learns a set of codebooks of model graphs for each object category, where each codebook contains information about which local structures may appear on which parts of the object instances of the target category. A two-stage method for optimal matching is developed, where in the first stage a Bayesian classifier based on ICA factorization is used efficiently to select the matched codebook, and in the second stage a nearest neighbourhood classifier is used to assign the test graph to one of the learned model graphs of the selected codebook. Each matched test graph casts votes for possible identity and poses of an object instance, and then a Hough transformation technique is used in the pose space to identify and localize the object instances. An extensive evaluation on several large datasets validates the robustness of our proposed model in object category recognition and localization in the presence of scale and rotation changes.
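A toy version of the final Hough-voting step is sketched below: each matched local structure casts a vote for the object centre (here only 2D position, without the scale and rotation dimensions of the full pose space), and peaks in the accumulator give candidate detections. Feature positions and offsets are invented.

```python
# Hough voting for an object centre from matched local features.
import numpy as np

def hough_vote_centres(matches, image_shape, bin_size=10):
    """matches: list of ((x, y) feature position, (dx, dy) offset to centre)."""
    h, w = image_shape
    acc = np.zeros((h // bin_size + 1, w // bin_size + 1))
    for (x, y), (dx, dy) in matches:
        cx, cy = x + dx, y + dy
        if 0 <= cx < w and 0 <= cy < h:
            acc[int(cy) // bin_size, int(cx) // bin_size] += 1
    by, bx = np.unravel_index(np.argmax(acc), acc.shape)
    centre = (bx * bin_size + bin_size // 2, by * bin_size + bin_size // 2)
    return centre, acc.max()

if __name__ == "__main__":
    # Three matched features agree on a centre near (120, 80); one is an outlier.
    matches = [((100, 60), (20, 20)), ((140, 90), (-20, -10)),
               ((110, 100), (10, -20)), ((30, 30), (5, 5))]
    centre, votes = hough_vote_centres(matches, image_shape=(200, 300))
    print("estimated centre bin:", centre, "votes:", votes)
```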
Citations: 4
3D Tree-Structured Object Tracking for Autonomous Ground Vehicles
Pub Date : 2007-05-28 DOI: 10.1109/CRV.2007.1
Changsoo Jeong, A. C. Parker
Safe and effective vision analysis is a key capability for autonomous ground vehicle (AGV) guidance systems. The complexity of natural settings requires the use of a robust image understanding technique. The proposed novel 3D tree-structured object tracking approach is implemented by tracking 2D objects in successive video frames using a wavelet-domain tree structure. It is robust and reliable due to its powerful data structure and also adaptable for moving and stationary object tracking as well as the tracking problem when the vehicle itself is in motion. This approach consists of wavelet decomposition, spatial object detection and temporal object tracking. The results show this approach can produce precise detection and tracking results.
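The wavelet-domain tree structure is built on subband decompositions such as the one-level 2D Haar split sketched below; this shows only the subband computation, not the detection or tracking stages.

```python
# One-level 2D Haar decomposition into LL, LH, HL, HH subbands.
import numpy as np

def haar2d_level(img):
    """img: 2D array with even height and width; returns (LL, LH, HL, HH)."""
    img = img.astype(float)
    # Horizontal pass: pairwise averages (low-pass) and differences (high-pass).
    lo = (img[:, 0::2] + img[:, 1::2]) / 2.0
    hi = (img[:, 0::2] - img[:, 1::2]) / 2.0
    # Vertical pass applied to both filtered images.
    LL = (lo[0::2, :] + lo[1::2, :]) / 2.0
    LH = (lo[0::2, :] - lo[1::2, :]) / 2.0
    HL = (hi[0::2, :] + hi[1::2, :]) / 2.0
    HH = (hi[0::2, :] - hi[1::2, :]) / 2.0
    return LL, LH, HL, HH

if __name__ == "__main__":
    frame = np.arange(64, dtype=float).reshape(8, 8)
    for name, band in zip(("LL", "LH", "HL", "HH"), haar2d_level(frame)):
        print(name, band.shape)
```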
Citations: 2
Efficient camera motion and 3D recovery using an inertial sensor
Pub Date : 2007-05-28 DOI: 10.1109/CRV.2007.23
M. Labrie, P. Hébert
This paper presents a system for 3D reconstruction using a camera combined with an inertial sensor. The system mainly exploits the orientation obtained from the inertial sensor in order to accelerate and improve the matching process between wide baseline images. The orientation further contributes to incremental 3D reconstruction of a set of feature points from linear equation systems. The processing can be performed online while using consecutive groups of three images overlapping each other. Classic or incremental bundle adjustment is applied to improve the quality of the model. Test validation has been performed on object and camera centric sequences.
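The "linear equation systems" for feature points amount to standard linear triangulation once the orientations are fixed, as they would be by the inertial sensor. The sketch below triangulates one point from two views by SVD; the intrinsics and poses are invented for illustration.

```python
# Linear (DLT) triangulation of one 3D point from two calibrated views.
import numpy as np

def triangulate(P1, P2, x1, x2):
    """P1, P2: 3x4 projection matrices; x1, x2: (u, v) pixel observations."""
    A = np.vstack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, vt = np.linalg.svd(A)
    X = vt[-1]
    return X[:3] / X[3]

if __name__ == "__main__":
    K = np.array([[800.0, 0, 320], [0, 800.0, 240], [0, 0, 1]])
    R1, t1 = np.eye(3), np.zeros(3)                   # first camera at the origin
    R2, t2 = np.eye(3), np.array([-0.5, 0.0, 0.0])    # second camera translated
    P1 = K @ np.hstack([R1, t1[:, None]])
    P2 = K @ np.hstack([R2, t2[:, None]])

    def project(P, X):
        x = P @ np.append(X, 1.0)
        return x[:2] / x[2]

    X_true = np.array([0.2, -0.1, 4.0])
    print(triangulate(P1, P2, project(P1, X_true), project(P2, X_true)))  # ~X_true
```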
Citations: 17
Energy Efficient Robot Rendezvous
Pub Date : 2007-05-28 DOI: 10.1109/CRV.2007.27
Pawel Zebrowski, Y. Litus, R. Vaughan
We examine the problem of finding a single meeting location for a group of heterogeneous autonomous mobile robots, such that the total system cost of traveling to the rendezvous is minimized. We propose two algorithms that solve this problem. The first method computes an approximate globally optimal meeting point using numerical simplex minimization. The second method is a computationally cheap heuristic that computes a local heading for each robot: by iterating this method, all robots arrive at the globally optimal location. We compare the performance of both methods to a naive algorithm (center of mass). Finally, we show how to extend the methods with inter-robot communication to adapt to new environmental information.
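In the spirit of the first method, the sketch below minimises the total weighted travel cost with a Nelder-Mead simplex search (via SciPy) and compares the result with the naive centre-of-mass baseline; the robot positions and per-unit-distance energy costs are invented.

```python
# Rendezvous point by numerical simplex minimisation vs. centre of mass.
import numpy as np
from scipy.optimize import minimize

positions = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 8.0], [12.0, 9.0]])
costs = np.array([1.0, 1.0, 3.0, 0.5])     # energy per unit distance per robot

def total_cost(p):
    return np.sum(costs * np.linalg.norm(positions - p, axis=1))

centre_of_mass = positions.mean(axis=0)
res = minimize(total_cost, x0=centre_of_mass, method="Nelder-Mead")

print("centre of mass :", centre_of_mass, "cost", total_cost(centre_of_mass))
print("simplex optimum:", res.x, "cost", total_cost(res.x))
```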
Citations: 31