
2013 IEEE Workshop on Robot Vision (WORV): Latest Publications

Calibration of a network of Kinect sensors for robotic inspection over a large workspace
Pub Date : 2013-05-30 DOI: 10.1109/WORV.2013.6521936
R. Macknojia, A. Chávez-Aragón, P. Payeur, R. Laganière
This paper presents an approach for calibrating a network of Kinect devices used to guide robotic arms with rapidly acquired 3D models. The method takes advantage of the rapid 3D measurement technology embedded in the Kinect sensor and provides registration accuracy within the range of the depth measurement accuracy provided by this technology. The internal calibration of the sensor between its color and depth measurements is also presented. The resulting system is developed to inspect large objects, such as vehicles, positioned within an enlarged field of view created by the network of RGB-D sensors.
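The extrinsic half of such a calibration ultimately comes down to estimating a rigid transform between pairs of overlapping sensors. As a minimal sketch of that standard building block (not the authors' exact pipeline), the snippet below recovers the rotation and translation between two Kinects from corresponding 3D points, assuming such correspondences (e.g. back-projected checkerboard corners) are already available; all variable names are placeholders.

```python
import numpy as np

def rigid_transform(src, dst):
    """Least-squares rigid transform (R, t) mapping src -> dst.

    src, dst: (N, 3) arrays of corresponding 3D points seen by two Kinects.
    Uses the SVD-based (Kabsch) solution of the orthogonal Procrustes problem.
    """
    src_c = src - src.mean(axis=0)            # centre both point sets
    dst_c = dst - dst.mean(axis=0)
    H = src_c.T @ dst_c                       # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # avoid reflections
    R = Vt.T @ D @ U.T
    t = dst.mean(axis=0) - R @ src.mean(axis=0)
    return R, t

# Synthetic correspondences: the same points seen by Kinect A and Kinect B.
rng = np.random.default_rng(0)
pts_a = rng.uniform(-1.0, 1.0, size=(50, 3))
angle = np.deg2rad(30.0)
R_true = np.array([[np.cos(angle), -np.sin(angle), 0.0],
                   [np.sin(angle),  np.cos(angle), 0.0],
                   [0.0,            0.0,           1.0]])
pts_b = pts_a @ R_true.T + np.array([0.5, -0.2, 2.0])
R, t = rigid_transform(pts_a, pts_b)
print(np.allclose(R, R_true, atol=1e-6))      # True
```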
Citations: 50
Sensitivity evaluation of embedded code detection in imperceptible structured light sensing
Pub Date : 2013-05-30 DOI: 10.1109/WORV.2013.6521910
Jingwen Dai, R. Chung
We address the use of pre-trained primitive-shape detectors for identifying embedded codes in imperceptible structured light (ISL) sensing. The accuracy of the whole sensing system is determined by the performance of such detectors. In training-based methods, generalization of the training results is often an issue, especially when the work scenario can vary substantially between the training stage and the operation stage. This paper presents sensitivity evaluation results for embedded code detection in ISL sensing, together with the associated statistical analysis. The results show that the scheme of embedding imperceptible codes into normal video projection remains effective despite possible variations in sensing distance, projection-surface orientation, projection-surface shape, projection-surface texture and hardware configuration. The finding indicates the feasibility of integrating the ISL method into robotic systems operating over a wide range of circumstances.
Citations: 0
Trinocular visual odometry for divergent views with minimal overlap
Pub Date : 2013-05-30 DOI: 10.1109/WORV.2013.6521943
Jaeheon Jeong, J. Mulligan, N. Correll
We present a visual odometry algorithm for trinocular systems with divergent views and minimal overlap. Whereas bundle adjustment is the preferred method for multi-view visual odometry problems, it is infeasible when the number of features in the images, such as in HD videos, is large. We propose a divide-and-conquer approach, which reduces the trinocular visual odometry problem to five monocular visual odometry problems: one for each individual camera sequence, and two more using features matched temporally between consecutive images from the center camera to the left and right cameras, respectively. Unlike the bundle adjustment method, whose computational complexity is O(n^3), the proposed approach matches features only between neighboring cameras and can therefore be executed in O(n^2). Assuming constant motion of the cameras, temporal tracking allows us to make up for the missing overlap between cameras, as objects from the center view eventually appear in the left or right camera. The scale factors that cannot be determined by monocular visual odometry are computed by constructing a system of equations based on the known relative camera poses and the five monocular VO estimates. The system is solved using a weighted least squares scheme and remains over-determined even when the camera path follows a straight line. We evaluate the resulting system using synthetic and real video sequences that were recorded for a virtual exercise environment.
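The abstract does not give the explicit form of the scale-recovery equations, so the sketch below illustrates only the weighted least squares step, assuming the linear system A s = b relating the unknown per-sequence scale factors s to the known inter-camera geometry has already been assembled; the matrices, weights and dimensions are placeholders.

```python
import numpy as np

def weighted_least_squares(A, b, w):
    """Solve min_s || W^(1/2) (A s - b) ||^2 for s.

    A: (m, n) design matrix, b: (m,) observations, w: (m,) positive weights.
    Equivalent to the normal equations (A^T W A) s = A^T W b.
    """
    sw = np.sqrt(w)
    s, *_ = np.linalg.lstsq(A * sw[:, None], b * sw, rcond=None)
    return s

# Hypothetical over-determined system: 5 unknown scale factors, 12 constraints.
rng = np.random.default_rng(1)
A = rng.normal(size=(12, 5))
s_true = np.array([1.2, 0.8, 1.0, 1.1, 0.9])
b = A @ s_true + rng.normal(scale=0.01, size=12)
w = np.full(12, 1.0)           # e.g. down-weight constraints from noisy matches
print(weighted_least_squares(A, b, w).round(2))
```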
Citations: 7
Dense range images from sparse point clouds using multi-scale processing
Pub Date : 2013-05-30 DOI: 10.1109/WORV.2013.6521928
L. Do, Lingni Ma, P. H. N. de With
Multi-modal data processing based on visual and depth/range images has become relevant in computer vision for 3D reconstruction applications such as city modeling and robot navigation. In this paper, we generate high-accuracy dense range images from sparse point clouds to facilitate such applications. Our proposal addresses the problems of sparse data, mixed pixels at discontinuities, and occlusions by combining multi-scale range images. The visual results show that our algorithm can create high-resolution dense range images with sharp discontinuities, while preserving the topology of objects even in environments that contain occlusions. To demonstrate the effectiveness of our approach, we propose an iterative perspective-to-point algorithm that aligns the edges between the color image and the range image from various viewpoints. The experimental results from 46 viewpoints show that the camera pose can be corrected when using high-accuracy dense range images, so that 3D reconstruction or 3D rendering attains clearly higher quality.
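As a rough illustration of the multi-scale idea (not the authors' actual algorithm), the sketch below z-buffers a sparse point cloud into range images at several resolutions and fills holes at the finest scale from up-sampled coarser scales; the pinhole intrinsics, and the assumption that the image size is divisible by the scale factors, are for illustration only.

```python
import numpy as np

def render_depth(points, fx, fy, cx, cy, h, w, scale):
    """Z-buffer a point cloud (camera frame) into a range image at reduced resolution."""
    hs, ws = h // scale, w // scale
    depth = np.full((hs, ws), np.inf)
    pts = points[points[:, 2] > 0]                 # keep points in front of the camera
    u = np.round((fx * pts[:, 0] / pts[:, 2] + cx) / scale).astype(int)
    v = np.round((fy * pts[:, 1] / pts[:, 2] + cy) / scale).astype(int)
    ok = (u >= 0) & (u < ws) & (v >= 0) & (v < hs)
    for ui, vi, zi in zip(u[ok], v[ok], pts[ok, 2]):
        depth[vi, ui] = min(depth[vi, ui], zi)     # keep the closest point per pixel
    return depth

def multi_scale_depth(points, fx, fy, cx, cy, h, w, scales=(1, 2, 4, 8)):
    """Fill holes in the finest range image with up-sampled coarser ones.

    Assumes h and w are divisible by every entry of `scales`.
    """
    out = render_depth(points, fx, fy, cx, cy, h, w, scales[0])
    for s in scales[1:]:
        coarse = render_depth(points, fx, fy, cx, cy, h, w, s)
        up = np.repeat(np.repeat(coarse, s, axis=0), s, axis=1)[:h, :w]
        hole = ~np.isfinite(out)                   # pixels not hit at the finest scale
        out[hole] = up[hole]
    return out
```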
Citations: 0
Real-time Collision Risk Estimation based on Pearson's Correlation Coefficient
Pub Date : 2013-05-30 DOI: 10.1109/WORV.2013.6521911
A. Miranda Neto, A. Victorino, I. Fantoni, J. V. Ferreira
The perception of the environment is a major issue for autonomous robots. In our previous work, we proposed a visual perception system based on an automatic image-discarding method as a simple solution to improve the performance of a real-time navigation system. In this paper, we consider obstacle avoidance for vehicles in dynamic and unknown environments, and we propose a new method for Collision Risk Estimation (CRE) based on Pearson's Correlation Coefficient (PCC). Applying the PCC to real-time CRE has not been done before, making the concept unique. This paper provides a novel way of calculating collision risk and applying it to obstacle avoidance using the PCC. This real-time perception system has been evaluated on real data obtained by our intelligent vehicle.
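How the coefficient is converted into a risk score and combined with the image-discarding front end is specific to the paper and not reproduced here; the sketch below only shows the Pearson correlation computation itself between two grayscale frames or regions of interest, which is the quantity the method builds on. The frame names are placeholders.

```python
import numpy as np

def pearson_correlation(frame_a, frame_b):
    """Pearson's correlation coefficient between two equally sized grayscale images.

    r = cov(a, b) / (std(a) * std(b)), in [-1, 1]; values near 1 mean the two
    frames (or regions of interest) are highly similar.
    """
    a = frame_a.astype(float).ravel()
    b = frame_b.astype(float).ravel()
    a -= a.mean()
    b -= b.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum())
    return float((a * b).sum() / denom) if denom > 0 else 0.0

# Two synthetic consecutive frames: a low correlation between the expected and
# observed region ahead of the vehicle would indicate a potential obstacle.
prev_frame = np.random.default_rng(2).integers(0, 256, size=(120, 160))
noise = np.random.default_rng(3).integers(-10, 10, size=(120, 160))
next_frame = np.clip(prev_frame + noise, 0, 255)
print(pearson_correlation(prev_frame, next_frame))   # close to 1.0
```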
Citations: 6
A wireless robotic video laparo-endoscope for minimal invasive surgery
Pub Date : 2013-05-30 DOI: 10.1109/WORV.2013.6521931
A. Alqassis, C. A. Castro, S. Smith, T. Ketterl, Yu Sun, P. P. Savage, R. Gitlin
This paper describes the design, prototyping and deployment of a network of wireless Miniature Anchored Robotic Videoscopes for Expedited Laparoscopy (MARVEL). The MARVEL robotic Camera Modules (CMs) remove the need for a dedicated trocar port for an external laparoscope, additional incisions for surgical instrumentation, camera cabling for power, video and xenon light, and an assistant in the operating room to hold and position the laparoscope. The system includes: (1) multiple MARVEL CMs that feature a wirelessly controlled pan/tilt camera platform, which provides a full-hemisphere field of view inside the abdominal cavity from different angles, wirelessly controlled focus and a wireless illumination control system, and (2) a Master Control Module (MCM) that provides a near-zero-latency wireless video communications link, independent wireless control of multiple MARVEL CMs, digital zoom, manual focus, and a wireless Human-Machine Interface (HMI) that gives the surgeon full control over all functions of the CMs. In-vivo experiments on a porcine subject were carried out to test the performance of the system.
Citations: 2
Near surface light source estimation from a single view image
Pub Date : 2013-05-30 DOI: 10.1109/WORV.2013.6521920
Wu Yuan Xie, C. Chung
Several techniques have been developed for estimating the light source position in indoor or outdoor environments. However, those techniques assume that the light source can be approximated by a point, which cannot be applied safely to, for example, some cases of photometric stereo reconstruction where the light source is placed quite close to a small target, and hence the size of the light source cannot be ignored. In this paper, we present a novel approach for estimating the light source from a single image of a scene that is illuminated by a near surface light source. We propose to employ a shiny sphere and a Lambertian plate as light probes to locate the light source position, where the albedo variance of the Lambertian plate is used as the basis of the objective function. We also illustrate the convexity of this objective function and propose an efficient way to search for the optimal value, i.e. the source position. We test our calibration results on real images by means of photometric stereo reconstruction and image rendering, and both testing results show the accuracy of our estimation framework.
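Under a near-light Lambertian model, the recovered albedo of a uniform plate is constant only when the hypothesised source position is correct, which is what makes the albedo variance usable as an objective. The sketch below is a hypothetical rendering of that objective for a planar plate; the plate geometry, synthetic intensities and test positions are invented for illustration, and the paper's actual search strategy is not reproduced.

```python
import numpy as np

def albedo_variance(source_pos, plate_points, plate_normal, intensities):
    """Albedo variance over a Lambertian plate for a candidate point source.

    Near-light Lambertian model: I_i = albedo_i * max(n . l_i, 0) / d_i^2, where
    l_i is the unit direction from surface point p_i to the source and d_i the
    distance. For the true source position the recovered albedo of a uniform
    plate is constant, so its variance is (near) zero.
    """
    diff = source_pos - plate_points              # (N, 3)
    d = np.linalg.norm(diff, axis=1)
    shading = (diff @ plate_normal) / d ** 3      # (n . l) / d^2
    valid = shading > 1e-9
    albedo = intensities[valid] / shading[valid]
    return albedo.var()

# Hypothetical setup: a 0.2 m square plate in the z = 0 plane, uniform albedo 0.7.
xs, ys = np.meshgrid(np.linspace(-0.1, 0.1, 21), np.linspace(-0.1, 0.1, 21))
plate = np.stack([xs.ravel(), ys.ravel(), np.zeros(xs.size)], axis=1)
normal = np.array([0.0, 0.0, 1.0])
true_source = np.array([0.05, 0.02, 0.3])
diff = true_source - plate
I = 0.7 * (diff @ normal) / np.linalg.norm(diff, axis=1) ** 3   # synthetic image
print(albedo_variance(true_source, plate, normal, I))                        # ~0
print(albedo_variance(np.array([0.0, 0.0, 0.5]), plate, normal, I) > 1e-6)   # True
```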
Citations: 2
Active view planing for human observation through a RGB-D camera
Pub Date : 2013-05-30 DOI: 10.1109/WORV.2013.6521923
Jianhao Du, W. Sheng
Human sensing is an important topic for robotic applications. In this paper, we propose an active view planning approach for human observation on a mobile robot platform with sensor data processing. The sensor adopted in our research is an inexpensive RGB-D camera. A new measure based on distance and orientation information is introduced to evaluate the quality of the viewpoint when the robot detects the human subject. The results show that the robot can move to the best viewpoint based on the proposed approach.
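The abstract does not spell out the measure, so the score below is only a hypothetical illustration of how distance and orientation terms might be combined; the preferred distance, falloff width and multiplicative combination are assumptions, not the authors' formulation.

```python
import numpy as np

def viewpoint_quality(robot_pos, robot_heading, subject_pos, subject_facing,
                      preferred_dist=2.0, dist_sigma=0.75):
    """Hypothetical viewpoint score in [0, 1] combining distance and orientation.

    - distance term: Gaussian falloff around a preferred sensing distance;
    - orientation terms: 1 when the robot looks straight at the subject and the
      subject faces the robot, 0 when either angle exceeds 90 degrees.
    """
    to_subject = subject_pos - robot_pos
    dist = np.linalg.norm(to_subject)
    to_subject = to_subject / dist
    dist_term = np.exp(-((dist - preferred_dist) ** 2) / (2 * dist_sigma ** 2))
    robot_term = max(0.0, float(np.dot(robot_heading, to_subject)))
    subject_term = max(0.0, float(np.dot(subject_facing, -to_subject)))
    return dist_term * robot_term * subject_term

# The robot would evaluate candidate poses and move to the highest-scoring one.
print(viewpoint_quality(np.array([0.0, 0.0]), np.array([1.0, 0.0]),
                        np.array([2.0, 0.0]), np.array([-1.0, 0.0])))   # 1.0
```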
Citations: 2
Meal support system with spoon using laser range finder and manipulator
Pub Date : 2013-05-30 DOI: 10.2316/Journal.206.2016.3.206-4342
Yuichi Kobayashi, Yutaro Ohshima, T. Kaneko, A. Yamashita
This paper presents an autonomous meal support robot system that can handle non-rigid solid food. The robot system is equipped with a laser range finder (LRF) and a manipulator holding a spoon. The LRF measures the 3D coordinates of surface points belonging to food on a plate. The robot then determines the position on the food surface to scoop, and the manipulator moves along the calculated trajectory. An advantage of the system is that the food does not need to be cut into bite-sized pieces in advance. The proposed scooping control was implemented and verified in experiments with two kinds of non-rigid solid food. It was shown that the robot can scoop most of the food with a high success rate.
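The abstract states only that a scoop position is selected from the measured surface points; the sketch below shows one simple, hypothetical way to do that (the highest food point above an estimated plate plane) and is not the authors' actual criterion.

```python
import numpy as np

def scoop_target(points, plate_height=None, margin=0.005):
    """Pick a hypothetical scoop target from LRF surface points (N, 3), z up.

    The plate height is taken as the lowest measured z (or given), points less
    than `margin` above it are treated as plate, and the target is the highest
    remaining food point.
    """
    z = points[:, 2]
    if plate_height is None:
        plate_height = z.min()
    food = points[z > plate_height + margin]
    if food.size == 0:
        return None                        # nothing left to scoop
    return food[np.argmax(food[:, 2])]

# Synthetic plate scan: flat plate at z = 0 with a mound of food near the centre.
rng = np.random.default_rng(4)
plate_pts = np.column_stack([rng.uniform(-0.1, 0.1, (400, 2)), np.zeros((400, 1))])
food_pts = np.column_stack([rng.uniform(-0.03, 0.03, (100, 2)),
                            rng.uniform(0.01, 0.03, (100, 1))])
print(scoop_target(np.vstack([plate_pts, food_pts])))
```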
Citations: 18
Autonomous navigation and sign detector learning
Pub Date : 2013-05-30 DOI: 10.1109/WORV.2013.6521929
L. Ellis, N. Pugeault, K. Ofjall, J. Hedborg, R. Bowden, M. Felsberg
This paper presents an autonomous robotic system that incorporates novel Computer Vision, Machine Learning and Data Mining algorithms in order to learn to navigate and discover important visual entities. This is achieved within a Learning from Demonstration (LfD) framework, where policies are derived from example state-to-action mappings. For autonomous navigation, a mapping is learnt from holistic image features (GIST) onto control parameters using Random Forest regression. Additionally, visual entities (road signs, e.g. a STOP sign) that are strongly associated with autonomously discovered modes of action (e.g. stopping behaviour) are discovered through a novel Percept-Action Mining methodology. The resulting sign detector is learnt without any supervision (no image labeling or bounding-box annotations are used). The complete system is demonstrated on a fully autonomous robotic platform, featuring a single camera mounted on a standard remote-control car. The robot carries a laptop that performs all processing on board in real time.
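Assuming GIST descriptors have already been extracted for each demonstration frame (the feature extraction itself is outside this sketch), the Learning-from-Demonstration regression step could look roughly as follows; the array shapes and the (steering, throttle) control parametrisation are assumptions, not the authors' exact setup.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

# Placeholder demonstration data: one 512-D GIST descriptor per frame and the
# control parameters (steering, throttle) recorded from the human driver.
rng = np.random.default_rng(5)
gist_features = rng.normal(size=(1000, 512))        # would come from a GIST extractor
controls = rng.uniform(-1.0, 1.0, size=(1000, 2))   # [steering, throttle] per frame

# Random Forest regression from holistic image features onto control parameters.
policy = RandomForestRegressor(n_estimators=100, random_state=0)
policy.fit(gist_features, controls)

# At run time, each new frame's descriptor is mapped directly to a control command.
new_frame_gist = rng.normal(size=(1, 512))
steering, throttle = policy.predict(new_frame_gist)[0]
print(steering, throttle)
```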
Citations: 9