
The 2nd Canadian Conference on Computer and Robot Vision (CRV'05): Latest Publications

Topology inference for a vision-based sensor network
Pub Date : 2005-05-09 DOI: 10.1109/CRV.2005.81
D. Marinakis, G. Dudek
In this paper we describe a technique to infer the topology and connectivity information of a network of cameras based on observed motion in the environment. While the technique can use labels from reliable camera systems, the algorithm is powerful enough to function using ambiguous tracking data. The method requires no prior knowledge of the relative locations of the cameras and operates under very weak environmental assumptions. Our approach stochastically samples plausible agent trajectories based on a delay model that allows for transitions to and from sources and sinks in the environment. The technique demonstrates considerable robustness both to sensor error and to non-trivial patterns of agent motion. The output of the method is a Markov model describing the behavior of agents in the system and the underlying traffic patterns. The concept is demonstrated with simulation data and verified with experiments conducted on a six-camera sensor network.
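The output is described as a Markov model over camera nodes. Below is a minimal sketch, assuming agent trajectories are already available as sequences of node indices, of estimating such a transition matrix; the paper's delay model and stochastic trajectory sampling are not reproduced, and all names and data are illustrative.

```python
import numpy as np

def estimate_transition_matrix(trajectories, n_nodes, smoothing=1.0):
    """Estimate a first-order Markov transition matrix from agent
    trajectories, each given as a list of camera/node indices.
    `smoothing` is an add-alpha prior to avoid zero rows."""
    counts = np.full((n_nodes, n_nodes), smoothing)
    for traj in trajectories:
        for a, b in zip(traj[:-1], traj[1:]):
            counts[a, b] += 1.0
    # Normalize each row so it sums to one.
    return counts / counts.sum(axis=1, keepdims=True)

# Example: three plausible trajectories over a 4-node camera topology.
trajectories = [[0, 1, 2, 3], [0, 1, 3], [2, 1, 0]]
P = estimate_transition_matrix(trajectories, n_nodes=4)
print(P.round(2))
```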
Citations: 32
Real-time video surveillance with self-organizing maps
Pub Date : 2005-05-09 DOI: 10.1109/CRV.2005.65
M. Dahmane, J. Meunier
In this paper, we present an approach for video surveillance involving (a) moving object detection, (b) tracking and (c) normal/abnormal event recognition. The detection step uses an adaptive background subtraction technique with a shadow elimination model based on the color constancy principle. The target tracking involves a direct and inverse matrix matching process. The novelty of the paper lies mainly in the recognition stage, where we consider local motion properties (flow vector) as well as more global ones expressed by elliptic Fourier descriptors. From these temporal trajectory characterizations, two Kohonen maps allow normal behavior to be distinguished from abnormal or suspicious behavior. The classification results show a 94.6% correct recognition rate on video sequences taken by a low-cost webcam. Finally, the algorithm can be fully implemented in real time.
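The detection step is adaptive background subtraction. A minimal sketch of one common form of it (an exponentially weighted running-average background with per-pixel thresholding) follows; the shadow-elimination model and the Kohonen-map classifier of the paper are not shown, and all parameter values are illustrative.

```python
import numpy as np

class RunningAverageBackground:
    """Adaptive background model: running mean per pixel, with a
    fixed intensity threshold for foreground detection."""

    def __init__(self, first_frame, alpha=0.02, threshold=25.0):
        self.background = first_frame.astype(np.float64)
        self.alpha = alpha          # adaptation rate
        self.threshold = threshold  # difference needed to flag foreground

    def apply(self, frame):
        frame = frame.astype(np.float64)
        diff = np.abs(frame - self.background)
        foreground = diff > self.threshold
        # Update the background only where the scene looks static,
        # so moving objects are not absorbed into the model.
        self.background[~foreground] = (
            (1 - self.alpha) * self.background[~foreground]
            + self.alpha * frame[~foreground]
        )
        return foreground

# Usage on a tiny synthetic grayscale sequence:
frames = [np.zeros((4, 4)), np.zeros((4, 4))]
frames[1][1:3, 1:3] = 200.0  # a bright "object" appears
bg = RunningAverageBackground(frames[0])
print(bg.apply(frames[1]).astype(int))
```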
Citations: 27
Recognizing hand-raising gestures using HMM
Pub Date : 2005-05-09 DOI: 10.1109/CRV.2005.67
M. Hossain, M. Jenkin
Automatic attention-seeking gesture recognition is an enabling element of synchronous distance learning. Recognizing attention-seeking gestures is complicated by the temporal nature of the signal that must be recognized and by the similarity between attention-seeking and non-attention-seeking gestures. Here we describe two approaches to the recognition problem that utilize HMMs to learn the class of attention-seeking gestures: an explicit approach that encodes the temporal nature of the gestures within the HMM, and an implicit approach that augments the input token sequence with temporal markers. Experimental results demonstrate that the explicit approach is more accurate.
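Both approaches score token sequences with HMMs. A minimal sketch of the standard forward algorithm used to compute the likelihood of a discrete observation sequence under an HMM follows; the model parameters are toy values, and the paper's explicit/implicit temporal encodings are not reproduced.

```python
import numpy as np

def forward_log_likelihood(obs, pi, A, B):
    """Log-likelihood of a discrete observation sequence under an HMM.
    pi: (S,) initial state probabilities
    A:  (S, S) state transition matrix
    B:  (S, V) emission probabilities over V observation symbols
    obs: sequence of symbol indices"""
    alpha = pi * B[:, obs[0]]
    log_lik = 0.0
    for t in range(1, len(obs)):
        # Rescale at every step to avoid numerical underflow.
        scale = alpha.sum()
        log_lik += np.log(scale)
        alpha = (alpha / scale) @ A * B[:, obs[t]]
    log_lik += np.log(alpha.sum())
    return log_lik

# Toy 2-state, 3-symbol model; in practice one would compare the
# likelihoods under a "hand-raising" model and a "non-gesture" model.
pi = np.array([0.6, 0.4])
A = np.array([[0.7, 0.3], [0.4, 0.6]])
B = np.array([[0.5, 0.4, 0.1], [0.1, 0.3, 0.6]])
print(forward_log_likelihood([0, 1, 2, 2], pi, A, B))
```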
Citations: 23
Multi-view head pose estimation using neural networks
Pub Date : 2005-05-09 DOI: 10.1109/CRV.2005.55
M. Voit, Kai Nickel, R. Stiefelhagen
In the context of human-computer interaction, information about head pose is an important cue for building a statement about humans' focus of attention. In this paper, we present an approach to estimate horizontal head rotation of people inside a smart-room. This room is equipped with multiple cameras that aim to provide at least one facial view of the user at any location in the room. We use neural networks that were trained on samples of rotated heads in order to classify each camera view. Whenever there is more than one estimate of head rotation, we combine the different estimates into one joint hypothesis. We show experimentally, that by using the proposed combination scheme, the mean error for unknown users could be reduced by up to 50% when combining the estimates from multiple cameras.
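The final step combines per-camera rotation estimates into one joint hypothesis. A minimal sketch of one plausible fusion rule (confidence-weighted circular averaging of angles); this is not the paper's combination scheme, the neural-network classifiers are not shown, and the weights are illustrative.

```python
import numpy as np

def fuse_head_rotation(angles_deg, confidences):
    """Fuse per-camera horizontal head-rotation estimates (degrees)
    into one hypothesis by confidence-weighted circular averaging."""
    angles = np.radians(np.asarray(angles_deg, dtype=float))
    w = np.asarray(confidences, dtype=float)
    w = w / w.sum()
    # Average on the unit circle so 350 deg and 10 deg fuse to ~0 deg.
    x = np.sum(w * np.cos(angles))
    y = np.sum(w * np.sin(angles))
    return np.degrees(np.arctan2(y, x)) % 360.0

# Three cameras, one of them less confident about its view of the face.
print(fuse_head_rotation([350.0, 10.0, 20.0], [0.9, 0.8, 0.3]))
```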
Citations: 42
Detecting abnormal gait
Pub Date : 2005-05-09 DOI: 10.1109/CRV.2005.32
C. Bauckhage, John K. Tsotsos, F. Bunn
Analyzing human gait has become popular in computer vision. So far, however, contributions to this topic have almost exclusively considered the problem of person identification. In this paper, we view gait analysis from a different angle and examine its use as a means to deduce the physical condition of people. Understanding the detection of unusual movement patterns as a two-class problem leads to the idea of using support vector machines for classification. We thus present a homeomorphism between 2D lattices and binary shapes that provides a robust vector space embedding of body silhouettes. Experimental results underline that feature vectors obtained from this scheme are well suited to detecting abnormal gait: wavering, faltering, and falling can be detected reliably across individuals without tracking or recognizing limbs or body parts.
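Once silhouettes are embedded as feature vectors, normal versus abnormal gait becomes a two-class SVM problem. A minimal sketch using scikit-learn's SVC on synthetic stand-in features; the paper's lattice-homeomorphism embedding is not reproduced, and all data here is synthetic.

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)

# Synthetic stand-ins for silhouette embeddings: class 0 = normal gait,
# class 1 = abnormal (wavering / faltering / falling).
X_normal = rng.normal(loc=0.0, scale=1.0, size=(100, 32))
X_abnormal = rng.normal(loc=1.5, scale=1.0, size=(100, 32))
X = np.vstack([X_normal, X_abnormal])
y = np.concatenate([np.zeros(100), np.ones(100)])

clf = SVC(kernel="rbf", C=1.0, gamma="scale")
clf.fit(X, y)

# Classify a new embedded silhouette.
print(clf.predict(rng.normal(loc=1.5, scale=1.0, size=(1, 32))))
```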
Citations: 25
A computer vision system for monitoring medication intake
Pub Date : 2005-05-09 DOI: 10.1109/CRV.2005.5
David Batz, Michael Batz, N. Lobo, M. Shah
We propose a computer vision system to assist a human user in the monitoring of their medication habits. This task must be accomplished without the knowledge of any pill locations, as they are too small to track with a static camera, and are usually occluded. At the core of this process is a mixture of low-level, high-level, and heuristic techniques such as skin segmentation, face detection, template matching, and a novel approach to hand localization and occlusion handling. We discuss the approach taken towards this goal, along with the results of our testing phase.
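Skin segmentation is one of the low-level components listed. A minimal sketch of a simple rule-based skin segmentation in HSV space; the threshold values are illustrative assumptions, and the paper's face detection, template matching and hand-localization stages are not shown.

```python
import numpy as np
import cv2

def skin_mask(bgr_image):
    """Return a boolean mask of likely skin pixels using fixed HSV
    thresholds (illustrative values; real systems adapt to the user
    and the lighting)."""
    hsv = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2HSV)
    lower = np.array([0, 40, 60], dtype=np.uint8)    # hue, sat, value
    upper = np.array([25, 180, 255], dtype=np.uint8)
    mask = cv2.inRange(hsv, lower, upper)
    return mask > 0

# Usage on a synthetic 2x2 BGR image with one roughly skin-toned pixel.
img = np.zeros((2, 2, 3), dtype=np.uint8)
img[0, 0] = (120, 160, 210)  # B, G, R
print(skin_mask(img))
```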
Citations: 21
A quantitative comparison of 4 algorithms for recovering dense accurate depth
Pub Date : 2005-05-09 DOI: 10.1109/CRV.2005.11
Baozhong Tian, J. Barron
We report on four algorithms for recovering dense depth maps from long image sequences, where the camera motion is known a priori. All methods use a Kalman filter to integrate intensity derivatives or optical flow over time to increase accuracy.
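All four algorithms integrate measurements over time with a Kalman filter. A minimal sketch of a scalar, per-pixel Kalman update for a depth estimate observed through noisy per-frame measurements; it illustrates the temporal integration only, not any of the four specific algorithms compared in the paper, and the noise values are illustrative.

```python
class ScalarKalman:
    """Per-pixel scalar Kalman filter: a nearly static state (e.g. depth)
    observed through noisy measurements, so the predict step only
    inflates the variance by a small process noise."""

    def __init__(self, x0, p0, process_var=1e-4, meas_var=0.05):
        self.x = x0            # current depth estimate
        self.p = p0            # current estimate variance
        self.q = process_var   # process noise (model drift)
        self.r = meas_var      # measurement noise variance

    def update(self, z):
        self.p += self.q                    # predict
        k = self.p / (self.p + self.r)      # Kalman gain
        self.x += k * (z - self.x)          # correct with measurement z
        self.p *= (1.0 - k)
        return self.x

# Noisy depth measurements of a point at 2.0 m converge over frames.
kf = ScalarKalman(x0=1.0, p0=1.0)
for z in [2.1, 1.95, 2.05, 2.0, 1.98]:
    est = kf.update(z)
print(round(est, 3))
```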
Citations: 2
A linear Euclidean distance transform algorithm based on the linear-time Legendre transform
Pub Date : 2005-05-09 DOI: 10.1109/CRV.2005.7
Y. Lucet
We introduce a new exact Euclidean distance transform algorithm for binary images based on the linear-time Legendre Transform algorithm. The three-step algorithm uses dimension reduction and convex analysis results on the Legendre-Fenchel transform to achieve linear-time complexity. First, computation on a grid (the image) is reduced to computation on a line, then the convex envelope is computed, and finally the squared Euclidean distance transform is obtained. Examples and an extension to non-binary images are provided.
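The three-step algorithm reduces the 2D transform to 1D computations plus a convex-envelope (Legendre-Fenchel) step. A minimal sketch of the dimension-reduction idea: the squared Euclidean distance transform computed as two passes of a 1D transform, here with a simple O(n^2) inner minimization rather than the paper's linear-time Legendre transform.

```python
import numpy as np

def squared_edt_1d(f):
    """1D squared distance transform of a cost vector f:
    d[i] = min_j (f[j] + (i - j)^2).  O(n^2) for clarity; the paper
    computes this step in linear time via the Legendre transform."""
    n = len(f)
    idx = np.arange(n)
    return np.array([np.min(f + (i - idx) ** 2) for i in idx])

def squared_edt_2d(binary):
    """Squared Euclidean distance to the nearest foreground pixel,
    via separable 1D transforms applied to columns, then rows."""
    big = binary.size ** 2  # effectively +infinity for background pixels
    f = np.where(binary, 0, big).astype(float)
    cols = np.apply_along_axis(squared_edt_1d, 0, f)
    return np.apply_along_axis(squared_edt_1d, 1, cols)

img = np.zeros((5, 5), dtype=bool)
img[2, 2] = True  # single foreground pixel in the centre
print(squared_edt_2d(img).astype(int))
```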
Citations: 15
A hierarchical nonparametric method for capturing nonrigid deformations
Pub Date : 2005-05-09 DOI: 10.1109/CRV.2005.6
A. Ecker, S. Ullman
We present a novel approach for measuring deformations between image patches. Our algorithm is a variant of dynamic programming that is not inherently one-dimensional, and its scores are on a relative scale. The method is based on the combination of similarities between many overlapping sub-patches. The algorithm is designed to be robust to small deformations of parts at various positions and scales.
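The deformation measure combines similarities between many overlapping sub-patches. A minimal sketch of that basic ingredient only: score each overlapping sub-patch at a few candidate displacements in the other image and sum the best matches. The paper's hierarchical, relative-scale dynamic programming is not reproduced, and the similarity measure (sum of squared differences), window sizes and search radius are illustrative assumptions.

```python
import numpy as np

def subpatch_deformation_score(src, dst, sub=4, stride=2, radius=2):
    """Sum, over overlapping sub-patches of `src`, of the best (lowest)
    SSD match found within +/- `radius` pixels in `dst`."""
    h, w = src.shape
    total = 0.0
    for y in range(0, h - sub + 1, stride):
        for x in range(0, w - sub + 1, stride):
            ref = src[y:y + sub, x:x + sub]
            best = np.inf
            for dy in range(-radius, radius + 1):
                for dx in range(-radius, radius + 1):
                    yy, xx = y + dy, x + dx
                    if 0 <= yy <= h - sub and 0 <= xx <= w - sub:
                        cand = dst[yy:yy + sub, xx:xx + sub]
                        best = min(best, float(np.sum((ref - cand) ** 2)))
            total += best
    return total

rng = np.random.default_rng(1)
a = rng.random((16, 16))
b = np.roll(a, shift=1, axis=1)  # a slightly shifted copy of a
print(subpatch_deformation_score(a, a), subpatch_deformation_score(a, b))
```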
Citations: 1
Scene reconstruction with sparse range information
Pub Date : 2005-05-09 DOI: 10.1109/CRV.2005.70
Guangyi Chen, G. Dudek, L. Torres-Méndez
This paper addresses an approach to scene reconstruction by inferring missing range data in a partial range map, based on an intensity image and sparse initial range data. It is assumed that the initial known range data is given on a number of scan lines of one-pixel width. This assumption is natural for a range sensor acquiring range data in a real-world 3D environment. Both edge information from the intensity image and linear interpolation of the range data are used. Experiments show that this method gives very good results in inferring missing range data. It outperforms both the previous method and bilinear interpolation when only a very small percentage of the range data is known.
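The reconstruction combines linear interpolation of range samples with intensity-edge information. A minimal sketch of that idea for a single scan line: interpolate between known samples, but leave a gap unfilled if a strong intensity edge (a likely depth discontinuity) lies inside it. The paper's full inference over the 2D range map is not shown, and the edge threshold is illustrative.

```python
import numpy as np

def fill_scanline(range_vals, known, intensity, edge_thresh=30.0):
    """Linearly interpolate missing range values between known samples
    on one scan line, skipping gaps that contain a strong intensity edge."""
    out = range_vals.astype(float).copy()
    idx = np.flatnonzero(known)
    for a, b in zip(idx[:-1], idx[1:]):
        gap = np.arange(a + 1, b)
        if gap.size == 0:
            continue
        # Largest intensity step inside the gap (endpoints included).
        steps = np.abs(np.diff(intensity[a:b + 1].astype(float)))
        if steps.max() > edge_thresh:
            continue  # do not interpolate across an apparent edge
        t = (gap - a) / (b - a)
        out[gap] = (1 - t) * range_vals[a] + t * range_vals[b]
    return out

range_line = np.array([2.0, 0.0, 0.0, 0.0, 4.0])  # metres; zeros unknown
known = np.array([True, False, False, False, True])
smooth_intensity = np.array([100, 102, 101, 103, 104], dtype=float)
print(fill_scanline(range_line, known, smooth_intensity))
```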
Citations: 2