The 2nd Canadian Conference on Computer and Robot Vision (CRV'05): Latest Publications

Design and implementation of a robot soccer team based on omni-directional wheels
Pub Date : 2005-05-09 DOI: 10.1109/CRV.2005.31
Nadir Ould-Khessal
This paper describes work currently underway in building a robotic soccer team at Vaasa Polytechnic (BOTNIA). This contribution emphasises the design and implementation of a set of Jive robots based on omni-directional wheels. At the time of writing, a set of Jive omni-directional robots has been designed and used in the testing and implementation of a cooperative robotic system for RoboCup games.
Citations: 4
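The abstract does not give the drive equations for the omni-directional base, but the standard inverse kinematics for a three-wheel omni-directional robot can be sketched as below. The function name, the 120-degree wheel layout, and the chassis radius are illustrative assumptions, not values from the paper.

```python
import math

def omni_wheel_speeds(vx, vy, omega,
                      wheel_angles=(0.0, 2 * math.pi / 3, 4 * math.pi / 3),
                      radius=0.15):
    """Map a desired body velocity (vx, vy in m/s, omega in rad/s) to
    individual wheel speeds for a robot with omni-directional wheels
    mounted at `wheel_angles` around a chassis of radius `radius`.
    Each wheel picks up the component of body motion along its rolling
    direction plus the rotational term radius * omega.
    (Hypothetical parameters; the paper does not specify its geometry.)"""
    return [-math.sin(a) * vx + math.cos(a) * vy + radius * omega
            for a in wheel_angles]
```

For a pure rotation all three wheels spin at the same speed (`radius * omega`), and for a pure translation the wheel speeds of a symmetric 120-degree layout sum to zero, which is a quick sanity check on the geometry.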
Iterative corner extraction and matching for mosaic construction
Pub Date : 2005-05-09 DOI: 10.1109/CRV.2005.50
Salem Alkaabi, F. Deravi
A rapid, automatic, and iterative corner extraction and matching method for 2D mosaic construction is presented. The system progressively estimates the geometric transformation parameters between two misaligned images, combining corner extraction, matching, and transformation-parameter estimation into an iterative scheme. By aligning the images over successive iterations, accuracy improves significantly. The accurately aligned images are used to re-extract new features, which are then matched to select the correspondences used to estimate a transformation with n degrees of freedom. False correspondences are suppressed progressively to achieve an accurate transformation estimate. The system is used to construct a mosaic from two misaligned images, and its performance is demonstrated experimentally on images of varying complexity.
Citations: 11
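The fit-then-suppress loop described in the abstract can be sketched with an ordinary least-squares affine fit that progressively discards the worst-fitting correspondences. This is a generic reconstruction under stated assumptions (affine model, fixed keep fraction), not the authors' implementation.

```python
import numpy as np

def estimate_affine(src, dst, n_iter=5, keep_frac=0.8):
    """Iteratively estimate a 2D affine transform mapping src -> dst
    point correspondences. Each iteration fits by least squares, then
    keeps only the keep_frac fraction of correspondences with the
    smallest residuals, progressively suppressing false matches.
    (n_iter and keep_frac are illustrative choices.)"""
    src = np.asarray(src, float)
    dst = np.asarray(dst, float)
    ones = np.ones((len(src), 1))
    idx = np.arange(len(src))
    A = None
    for _ in range(n_iter):
        X = np.hstack([src[idx], ones[idx]])          # rows [x, y, 1]
        A, *_ = np.linalg.lstsq(X, dst[idx], rcond=None)  # 3x2 params
        res = np.linalg.norm(np.hstack([src, ones]) @ A - dst, axis=1)
        keep = max(3, int(keep_frac * len(idx)))      # never below 3 points
        idx = np.argsort(res)[:keep]                  # drop worst matches
    return A
```

With one gross outlier among otherwise exact correspondences, the outlier is discarded after the first iteration and the remaining fits recover the true transform.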
Collision and event detection using geometric features in spatio-temporal volumes
Pub Date : 2005-05-09 DOI: 10.1109/CRV.2005.26
M. Bolduc, F. Deschênes
In video sequences, edges in the 2D images (frames) produce 3D surfaces in the spatio-temporal volume. In this paper, we propose to treat temporal collisions between edges, and thus between objects, as 3D ridges in the spatio-temporal volume. Collisions (i.e. ridge points) can be located using the maximum principal curvature and the principal curvature direction. Using the detected collisions, we then propose a technique to detect object-overlap events in an image sequence without computing either depth or optical flow. We present successful experiments on real image sequences.
Citations: 4
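One way to realise "collisions as ridge points located by maximum principal curvature" is to take, at every voxel of the (x, y, t) volume, the largest-magnitude eigenvalue of the spatio-temporal Hessian. The sketch below is a generic numpy version of that idea, not the paper's detector.

```python
import numpy as np

def max_principal_curvature(vol):
    """Per-voxel Hessian eigenvalue of largest magnitude for a 3D
    spatio-temporal volume indexed (x, y, t). Large magnitudes flag
    ridge-like structures, which the abstract interprets as edge
    collisions. For smooth data the numerical Hessian is symmetric up
    to rounding, so eigvalsh applies."""
    grads = np.gradient(vol)                  # d/dx, d/dy, d/dt
    H = np.empty(vol.shape + (3, 3))
    for i, gi in enumerate(grads):
        gg = np.gradient(gi)                  # second derivatives
        for j in range(3):
            H[..., i, j] = gg[j]
    eig = np.linalg.eigvalsh(H)               # ascending, per voxel
    idx = np.abs(eig).argmax(axis=-1)
    return np.take_along_axis(eig, idx[..., None], axis=-1)[..., 0]
```

A Gaussian sheet constant in y and t has its strongest (negative) curvature response exactly on its crest, which is the behaviour a ridge detector should show.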
Controlling camera and lights for intelligent image acquisition and merging
Pub Date : 2005-05-09 DOI: 10.1109/CRV.2005.29
O. Borzenko, Y. Lespérance, M. Jenkin
Docking spacecraft and guiding mining machines are applications that often use remote video cameras equipped with one or more controllable light sources. In such applications the problem of parameter selection arises: how should the best parameters for the camera and lights be chosen? A further problem is that a single image often cannot capture the whole scene properly, so a composite image must be rendered. In this paper, we report on our progress with the CITO Lights and Camera project, which addresses the parameter-selection and merging problems for such systems. A prototype knowledge-based controller adjusts the lighting to iteratively acquire a collection of images of a target. At every stage, an entropy-based merging module combines these images into a composite. The result is a final composite image optimized for further image-processing tasks such as pose estimation or tracking.
Citations: 4
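An entropy-based merging module of the kind described can be approximated by scoring each registered exposure tile-by-tile with its histogram entropy and keeping the most informative tile. The tile size and bin count below are arbitrary assumptions; the paper does not describe its merging at this level of detail.

```python
import numpy as np

def entropy(block, bins=32):
    """Shannon entropy (bits) of a block's intensity histogram,
    for intensities in [0, 1]."""
    hist, _ = np.histogram(block, bins=bins, range=(0.0, 1.0))
    p = hist / hist.sum()
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

def merge_by_entropy(images, tile=8):
    """Composite several registered exposures of the same scene by
    choosing, for each tile, the exposure whose tile has the highest
    histogram entropy (i.e. carries the most detail).
    (Illustrative sketch, not the project's actual module.)"""
    out = np.zeros_like(images[0])
    h, w = out.shape
    for y in range(0, h, tile):
        for x in range(0, w, tile):
            blocks = [im[y:y + tile, x:x + tile] for im in images]
            best = max(range(len(images)), key=lambda i: entropy(blocks[i]))
            out[y:y + tile, x:x + tile] = blocks[best]
    return out
```

Given one flat (zero-entropy) exposure and one exposure containing a gradient, every tile of the composite comes from the detailed exposure.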
Real-time detection of faces in video streams
Pub Date : 2005-05-09 DOI: 10.1109/CRV.2005.64
M. C. Santana, O. Déniz-Suárez, Cayetano Guerra, M. Hernández-Tejera
This paper describes a face detection system that goes beyond traditional approaches, which are normally designed for still images. The detector is applied in the video-stream context, and the resulting system is therefore designed around a key feature available in a video stream: temporal coherence. The system builds a feature-based model for each detected face and searches for the faces in the next frame using that model information. The results achieved for video-stream processing outperform the Rowley-Kanade and Viola-Jones solutions, providing eye and face data in less time with a notably high correct-detection rate.
Citations: 18
Comparing classification metrics for labeling segmented remote sensing images
Pub Date : 2005-05-09 DOI: 10.1109/CRV.2005.28
P. Maillard, David A Clausi
Image segmentation and labelling are the two conceptual operations in image classification. As the remote sensing community adopts more powerful segmentation procedures with spatial constraints, new possibilities for labelling can be explored. Instead of assigning a label to a single observation (pixel), whole image segments are labelled at once, implying the use of multivariate samples rather than pixel vectors. This approach to image classification also offers new possibilities for using a priori information about the classes, such as existing maps or object signature libraries. The present paper addresses these two issues. First, a labelling scheme is presented that gathers evidence about the classes from incomplete a priori information using a "cognitive reasoning" approach. Then, five different metrics are compared for label assignment and are combined through a voting scheme. The results show that very different outcomes can be obtained depending on the metric chosen. The metric combination through voting, being a suboptimal approach, does not necessarily provide the best results, but could be a safe alternative to choosing only one metric.
Citations: 10
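The compare-metrics-then-vote step can be sketched as below. The abstract does not name its five metrics, so the three distance functions and the class names here are illustrative stand-ins.

```python
import numpy as np
from collections import Counter

def label_segment(features, class_means, metrics):
    """Assign a label to one segment (an array of per-pixel feature
    vectors): label it under each metric by nearest class mean, then
    combine the per-metric labels by majority vote. `metrics` maps a
    name to a distance(segment_mean, class_mean) function.
    (Hypothetical metrics and classes, not those of the paper.)"""
    seg_mean = features.mean(axis=0)
    votes = [min(class_means,
                 key=lambda c: dist(seg_mean, class_means[c]))
             for dist in metrics.values()]
    return Counter(votes).most_common(1)[0][0], votes

metrics = {
    "euclidean": lambda a, b: np.linalg.norm(a - b),
    "manhattan": lambda a, b: np.abs(a - b).sum(),
    "chebyshev": lambda a, b: np.abs(a - b).max(),
}
```

A segment whose mean feature vector sits near one class mean is labelled consistently under all three metrics, so the vote is unanimous; the interesting cases the paper studies are those where the metrics disagree.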
Head pose estimation of partially occluded faces
Pub Date : 2005-05-09 DOI: 10.1109/CRV.2005.45
Markus T. Wenzel, W. Schiffmann
This paper describes an algorithm that calculates the approximate head pose of partially occluded faces without training or manual initialization. The presented approach works on low-resolution webcam images. The algorithm is based on the observation that, for small depth rotations of a head, the rotation angles can be approximated linearly. It uses the CamShift (continuously adaptive mean shift) algorithm to track the user's head. With a pyramidal implementation of the iterative Lucas-Kanade optical flow algorithm, a particular feature point on the face is tracked. Pan and tilt of the head are estimated from the shift of the feature point relative to the center of the head; 3D position and roll are estimated from the CamShift results.
Citations: 15
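The linear small-rotation model in the abstract reduces to scaling the tracked feature point's pixel shift by the apparent head radius. A minimal sketch of that pan/tilt step follows; the function, the gain k, and the pixel-radius parameter are hypothetical, and the real system obtains the feature shift from pyramidal Lucas-Kanade tracking and the head centre from CamShift.

```python
def estimate_pan_tilt(feature_xy, head_center_xy, head_radius_px, k=1.0):
    """Approximate pan and tilt (radians) from the shift of a tracked
    facial feature point relative to the head centre, using the linear
    small-rotation approximation: angle ~ k * pixel_shift / radius.
    (Hypothetical gain k; a real system would calibrate it.)"""
    dx = feature_xy[0] - head_center_xy[0]
    dy = feature_xy[1] - head_center_xy[1]
    return k * dx / head_radius_px, k * dy / head_radius_px
```

For example, a feature point 10 px right of the centre of a 50 px-radius head yields a pan of 0.2 rad and zero tilt under this model.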