
Latest Publications: Proceedings of the 1997 IEEE/RSJ International Conference on Intelligent Robot and Systems. Innovative Robotics for Real-World Applications. IROS '97

Development of power assist system with individual compensation ratios for gravity and dynamic load
Y. Hayashibara, K. Tanie, H. Arai, H. Tokashiki
This paper presents the design concept of a power assist system. In such a system, when the controller is designed without considering the maximum torque of the actuators, the actuators can become saturated, resulting in a loss of stability and manoeuvrability. We propose a method for dealing with this problem: the load force is divided into gravitational and dynamic components, and each component is attenuated by an individual ratio. These ratios are determined by considering the maximum power of the operator and the actuators.
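The split-and-attenuate scheme can be sketched in a few lines. The names (`mass`, `alpha_g`, `alpha_d`) and the point-mass load model are illustrative assumptions, not details from the paper:

```python
G = 9.81  # gravitational acceleration, m/s^2

def assist_force(mass, accel, alpha_g, alpha_d):
    """Force supplied by the actuator for a point-mass load.

    The sensed load is split into a gravitational component (m*g) and
    a dynamic component (m*a); each is attenuated by its own ratio, so
    gravity compensation can stay high while the dynamic share is
    reduced to keep the actuators away from torque saturation.
    """
    return alpha_g * mass * G + alpha_d * mass * accel

# Example: 10 kg load accelerating at 2 m/s^2, with 90% gravity
# compensation but only 50% dynamic compensation.
f = assist_force(10.0, 2.0, alpha_g=0.9, alpha_d=0.5)
```

Choosing the two ratios independently is the point of the paper's design: the gravity term is bounded and can be compensated aggressively, while the dynamic term is what drives the actuators toward their torque limits.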
{"title":"Development of power assist system with individual compensation ratios for gravity and dynamic load","authors":"Y. Hayashibara, K. Tanie, H. Arai, H. Tokashiki","doi":"10.1109/IROS.1997.655079","DOIUrl":"https://doi.org/10.1109/IROS.1997.655079","url":null,"abstract":"This paper present the design concept of a power assist system. In such system, when the controller is designed without considering the maximum torque of the actuators, the actuators can sometimes become saturated, resulting in a loss of stability and manoeuvrability. We propose a method for dealing with this problem. The load force is divided into gravitational and dynamic component, and each component is attenuated by an individual ratio. These ratios are determined considering the maximum power of the operator and the actuators.","PeriodicalId":408848,"journal":{"name":"Proceedings of the 1997 IEEE/RSJ International Conference on Intelligent Robot and Systems. Innovative Robotics for Real-World Applications. IROS '97","volume":"122 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1997-09-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125890666","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 26
Visual learning and object verification with illumination invariance
K. Ohba, Yoichi Sato, K. Ikeuchi
This paper describes a method for recognizing partially occluded objects, to realize a bin-picking task under different levels of illumination brightness, using eigenspace analysis. In the proposed method, a measured color in the RGB color space is transformed into the HSV color space. The hue of the measured color, which is invariant to changes in illumination brightness and direction, is then used to recognize multiple objects under different illumination conditions. The proposed method was applied to real images of multiple objects under different illumination conditions, and the objects were recognized and localized successfully.
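The brightness invariance of hue is easy to check with the standard library. This sketch assumes ideal RGB values in [0, 1] and ignores sensor saturation and clipping:

```python
import colorsys

def hue_of(r, g, b):
    """Hue component (in [0, 1)) of an RGB color with channels in [0, 1]."""
    h, _s, _v = colorsys.rgb_to_hsv(r, g, b)
    return h

# Uniformly scaling all channels (a pure brightness change) leaves the
# hue untouched, which is what makes it usable for recognition under
# varying illumination.
bright = hue_of(0.8, 0.4, 0.2)
dim = hue_of(0.4, 0.2, 0.1)
assert abs(bright - dim) < 1e-9
```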
{"title":"Visual learning and object verification with illumination invariance","authors":"K. Ohba, Yoichi Sato, K. Ikeuchi","doi":"10.1109/IROS.1997.655139","DOIUrl":"https://doi.org/10.1109/IROS.1997.655139","url":null,"abstract":"This paper describes a method for recognizing partially occluded objects to realize a bin-picking task under different levels of illumination brightness by using the eigenspace analysis. In the proposed method, a measured color in the RGB color space is transformed into the HSV color space. Then, the hue of the measured color, which is invariant to change in illumination brightness and direction, is used for recognizing multiple objects under different levels of illumination conditions. The proposed method was applied to real images of multiple objects under different illumination conditions, and the objects were recognized and localized successfully.","PeriodicalId":408848,"journal":{"name":"Proceedings of the 1997 IEEE/RSJ International Conference on Intelligent Robot and Systems. Innovative Robotics for Real-World Applications. IROS '97","volume":"3 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1997-09-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123685916","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 7
Experiments on depth from magnification and blurring
S. Ahn, Sukhan Lee, A. Meyyappan, P. Schenker
A new method of extracting depth from the blurring and magnification of objects or a local scene is presented. Assuming no active illumination, the images are taken at two camera positions separated by a small displacement, using a single standard camera with a telecentric lens. The depth extraction method is therefore simple in structure and efficient in computation. By fusing the two disparate sources of depth information, magnification and blurring, the proposed method provides more accurate and robust depth estimation. This paper describes the experiments performed to validate this concept and the work that has been done in this field. The experimental results show less than 1% error over the optimal depth range. The ultimate aim of this concept is the construction of dense 3D maps of objects and real-time continuous estimation of depth.
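The magnification cue alone can be sketched under a pinhole-camera assumption (the paper combines it with blur and uses a telecentric lens, so this is only an illustration of the geometry, not the paper's method):

```python
def depth_from_magnification(size_near, size_far, dz):
    """Depth of a feature from its apparent size at two camera
    positions separated by dz along the optical axis.

    Under a pinhole model, apparent size scales as 1/depth, so
    size_near / size_far = (z + dz) / z for the nearer depth z.
    Solving for z gives z = dz / (m - 1), with m the size ratio.
    """
    m = size_near / size_far  # magnification ratio, > 1
    return dz / (m - 1.0)

# A feature spanning 110 px from the near position and 100 px after
# backing the camera up by 0.1 m lies 1.0 m from the near position.
z = depth_from_magnification(110.0, 100.0, 0.1)
```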
{"title":"Experiments on depth from magnification and blurring","authors":"S. Ahn, Sukhan Lee, A. Meyyappan, P. Schenker","doi":"10.1109/IROS.1997.655092","DOIUrl":"https://doi.org/10.1109/IROS.1997.655092","url":null,"abstract":"A new method of extracting depth from blurring and magnification of objects or local scene is presented. Assuming no active illumination, the images are taken at two camera positions of a small displacement, using a single standard camera with telecentric lens. Thus, the depth extraction method is simple in structure and efficient in computation. Fusing the two disparate sources of depth information, magnification and blurring, the proposed method provides more accurate and robust depth estimation. This paper describes the process of various experimentations performed to validate this concept and describes the present work that has been done in that field. The experimental result shows less than 1% error for an optimal depth range. The ultimate aim of this concept would be the construction of dense 3D maps of objects and real time continuous estimation of depth.","PeriodicalId":408848,"journal":{"name":"Proceedings of the 1997 IEEE/RSJ International Conference on Intelligent Robot and Systems. Innovative Robotics for Real-World Applications. IROS '97","volume":"51 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1997-09-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115487767","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 6
Visual tracking of an end-effector by adaptive kinematic prediction
A. Ruf, M. Tonko, R. Horaud, H. Nagel
Presents results of a model-based approach to visual tracking and pose estimation of a moving polyhedral tool in position-based visual servoing. This enables the control of a robot in look-and-move mode to achieve six-degree-of-freedom goal configurations. Robust solutions of the correspondence problem, known as "matching" in the static case and "tracking" in the dynamic one, are crucial to the feasibility of such an approach in real-world environments. The object's motion along an arbitrary trajectory in space is tracked using visual pose estimates from consecutive images. Subsequent positions are predicted from robot joint-angle measurements. To deal with inaccurate models and to relax calibration requirements, adaptive online calibration of the kinematic chain is proposed. The kinematic predictions enable unambiguous feature matching by a pessimistic algorithm. The performance of the suggested algorithms and the robustness of the proposed system are evaluated on real image sequences of a moving gripper. The results fulfill the requirements of visual servoing, and the computational demands are sufficiently low to allow real-time implementation.
{"title":"Visual tracking of an end-effector by adaptive kinematic prediction","authors":"A. Ruf, M. Tonko, R. Horaud, H. Nagel","doi":"10.1109/IROS.1997.655115","DOIUrl":"https://doi.org/10.1109/IROS.1997.655115","url":null,"abstract":"Presents results of a model-based approach to visual tracking and pose estimation for a moving polyhedral tool in position-based visual servoing. This enables the control of a robot in look-and-move mode to achieve six degree of freedom goal configurations. Robust solutions of the correspondence problem-known as \"matching\" in the static case and \"tracking\" in the dynamic one-are crucial to the feasibility of such an approach in real-world environments. The object's motion along an arbitrary trajectory in space is tracked using visual pose estimates through consecutive images. Subsequent positions are predicted from robot joint angle measurements. To deal with inaccurate models and to relax calibration requirements, adaptive online calibration of the kinematic chain is proposed. The kinematic predictions enable unambiguous feature matching by a pessimistic algorithm. The performance of the suggested algorithms and the robustness of the proposed system are evaluated on real image sequences of a moving gripper. The results fulfill the requirements of visual servoing, and the computational demands are sufficiently low to allow for real-time implementation.","PeriodicalId":408848,"journal":{"name":"Proceedings of the 1997 IEEE/RSJ International Conference on Intelligent Robot and Systems. Innovative Robotics for Real-World Applications. 
IROS '97","volume":"4 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1997-09-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115570656","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 44
Behavioral expression by an expressive mobile robot: expressing vividness, mental distance, and attention
H. Mizoguchi, Katsuyuki Takagi, Y. Hatamura, M. Nakao, Tomomasa Sato
This paper proposes the idea that a mobile robot can display behavioral expressions through its motion. Behavioral expressions are expressions of vividness, sense of distance, and attention. To confirm the idea concretely, an expressive mobile robot has been designed and implemented to display the behavioral expressions. Using the robot, psychological experiments were conducted to evaluate impressions of three items: 1) the velocity-changing pattern, 2) the distance between the human and the robot, and 3) various poses. The experimental results indicate: first, there is a proper speed pattern for the expression of vividness, the pattern being triangular along the time axis; second, there is a proper distance range between human and robot for expressing the mental distance between them, with an average value of about 2.5 m; third, when the robot faces the human, the impression of attention is increased when the robot tilts its head to one side or raises its hands. The implemented expressive mobile robot is puppy-sized and has 2 DOFs for locomotion, 2 DOFs for two swingable arms, and 3 DOFs for the pan, tilt, and yaw of its head. The experimental results prove the feasibility of the proposed idea of behavioral expression by the robot.
{"title":"Behavioral expression by an expressive mobile robot-expressing vividness, mental distance, and attention","authors":"H. Mizoguchi, Katsuyuki Takagi, Y. Hatamura, M. Nakao, Tomomasa Sato","doi":"10.1109/IROS.1997.649070","DOIUrl":"https://doi.org/10.1109/IROS.1997.649070","url":null,"abstract":"This paper proposes an idea that it is possible for a mobile robot to display behavioral expressions by its motion. Behavioral expressions are expressions of vividness, sense of distance and attention. To confirm the idea concretely, an expressive mobile robot has been designed and implemented to display the behavioral expressions. Utilizing the robot, psychological experiments have been conducted to evaluate impressions on three items: 1) velocity changing pattern, 2) distance between human and the robot, and 3) various poses. The experimental results indicate: firstly, there is a proper speed pattern for expression of vividness, the pattern being triangular along the time axis; secondly, there is a proper distance range between human and robot for expression of mental distance between them, its average value being about 2.5 m; thirdly, when the robot faces the human, the impression of attention is increased where the robot puts its head on one side or raises its hands. The implemented expressive mobile robot is puppy-sized and has 2 DOFs for motion, 2 DOFs for two swingable arms and 3 DOFs for pan, tilt and yaw of its head. The experimental results prove feasibility of the proposed idea of the behavioral expression by the robot.","PeriodicalId":408848,"journal":{"name":"Proceedings of the 1997 IEEE/RSJ International Conference on Intelligent Robot and Systems. Innovative Robotics for Real-World Applications. 
IROS '97","volume":"67 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1997-09-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116084906","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 2
GenoM: a tool for the specification and the implementation of operating modules in a distributed robot architecture
S. Fleury, M. Herrb, R. Chatila
This paper presents a general methodology for the specification and the integration of functional modules in a distributed reactive robot architecture. The approach is based on a hybrid architecture basically composed of two levels: a lower distributed functional level controlled by a centralized decisional level. Due to this methodology, synchronous or asynchronous operating capabilities (servo-control, data processing, event monitoring) can be easily added to the functional level. They are encapsulated into modules, built according to a generic model, that are seen by the decisional level as homogeneous, programmable, reactive and robust communicant services. Each module is simply described with a specific language and is automatically produced by a generator of modules (GenoM) according to the generic model. GenoM also produces an interactive test program and interface libraries to control the module and to read the resulting data, which allow one to directly integrate the module into the architecture.
{"title":"G/sup en/oM: a tool for the specification and the implementation of operating modules in a distributed robot architecture","authors":"S. Fleury, M. Herrb, R. Chatila","doi":"10.1109/IROS.1997.655108","DOIUrl":"https://doi.org/10.1109/IROS.1997.655108","url":null,"abstract":"This paper presents a general methodology for the specification and the integration of functional modules in a distributed reactive robot architecture. The approach is based on a hybrid architecture basically composed of two levels: a lower distributed functional level controlled by a centralized decisional level. Due to this methodology, synchronous or asynchronous operating capabilities (servo-control, data processing, event monitoring) can be easily added to the functional level. They are encapsulated into modules, built according to a generic model, that are seen by the decisional level as homogeneous, programmable, reactive and robust communicant services. Each module is simply described with a specific language and is automatically produced by a generator of modules (G/sup en/oM) according to the generic model. G/sup en/oM also produces an interactive test program and interface libraries to control the module and to read the resulting data, which allow one to directly integrate the module into the architecture.","PeriodicalId":408848,"journal":{"name":"Proceedings of the 1997 IEEE/RSJ International Conference on Intelligent Robot and Systems. Innovative Robotics for Real-World Applications. 
IROS '97","volume":"27 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1997-09-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116186577","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 162
Model-based object tracking in cluttered scenes with occlusions
F. Jurie
We propose an efficient method for tracking 3D modelled objects in cluttered scenes. Rather than tracking objects in the image, our approach relies on the object recognition aspect of tracking. Candidate matches between image and model features define volumes in the space of transformations. The volumes of the pose space satisfying the maximum number of correspondences are those that best align the model with the image. Object motion defines a trajectory in the pose space. We give some results showing that the presented method allows tracking of objects even when they are totally occluded for a short while, without supposing any motion model and with a low computational cost (below 200 ms per frame on a basic workstation). Furthermore, this algorithm can also be used to initialize the tracking.
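The pose-space voting idea can be sketched as follows; `cells_for`, which maps a candidate match to the quantized transform cells consistent with it, is a hypothetical helper standing in for the paper's geometric computation:

```python
from collections import Counter

def best_pose(candidate_matches, cells_for):
    """Vote in a quantized pose space.

    Each candidate image/model feature match votes for every transform
    cell it is consistent with; the cell collecting the most votes is
    the pose that best aligns model and image. cells_for(match) must
    return those cells as hashable values.
    """
    votes = Counter()
    for match in candidate_matches:
        for cell in cells_for(match):
            votes[cell] += 1
    return votes.most_common(1)[0] if votes else None

# Toy example: three matches voting over two pose cells.
table = {"m1": [(0, 0)], "m2": [(0, 0)], "m3": [(1, 0)]}
best = best_pose(["m1", "m2", "m3"], table.__getitem__)
```

Because the winning cell needs only a maximal, not a complete, set of consistent matches, the scheme degrades gracefully when some features are occluded, which is the property the abstract relies on.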
{"title":"Model-based object tracking in cluttered scenes with occlusions","authors":"F. Jurie","doi":"10.1109/IROS.1997.655114","DOIUrl":"https://doi.org/10.1109/IROS.1997.655114","url":null,"abstract":"We propose an efficient method for tracking 3D modelled objects in cluttered scenes. Rather than tracking objects in the image, our approach relies on the object recognition aspect of tracking. Candidate matches between image and model features define volumes in the space of transformations. The volumes of the pose space satisfying the maximum number of correspondences are those that best align the model with the image. Object motion defines a trajectory in the pose space. We give some results showing that the presented method allows tracking of objects even when they are totally occluded for a short while, without supposing any motion model and with a low computational cost (below 200 ms per frame on a basic workstation). Furthermore, this algorithm can also be used to initialize the tracking.","PeriodicalId":408848,"journal":{"name":"Proceedings of the 1997 IEEE/RSJ International Conference on Intelligent Robot and Systems. Innovative Robotics for Real-World Applications. IROS '97","volume":"100 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1997-09-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122703838","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 6
Acquisition of statistical motion patterns in dynamic environments and their application to mobile robot motion planning
E. Kruse, R. Gutsche, F. Wahl
In recent papers we (1996, 1997) have proposed a new path planning approach for mobile robots: statistical motion planning with respect to typical obstacle behavior in order to improve pre-planning in dynamic environments. In this paper, we present our experimental system: in a real environment, cameras observe the workspace in order to detect obstacle motions and to derive statistical data. We have developed new techniques based on stochastic trajectories to model obstacle behavior. Collision probabilities are calculated for polygonal objects moving on piecewise linear trajectories. The statistical data can be applied directly, thus the entire chain from raw sensor data to a stochastic assessment of robot trajectories is closed. Finally, some new work regarding different applications of statistical motion planning is outlined, including road-map approaches for pre-planning, expected time to reach the goal, and reactive behaviors.
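A Monte Carlo version of the collision-probability computation can be sketched as follows; the waypoint trajectory representation and the circular clearance test are simplifying assumptions, not the paper's polygon-on-piecewise-linear-trajectory computation:

```python
def collision_probability(robot_path, obstacle_model, clearance, n_samples=1000):
    """Monte Carlo estimate of the probability that a planned robot
    path conflicts with a randomly drawn obstacle trajectory.

    robot_path:     list of (t, x, y) waypoints.
    obstacle_model: callable returning one sampled obstacle trajectory
                    with the same time stamps, e.g. drawn from the
                    statistics gathered by the observing cameras.
    """
    hits = 0
    for _ in range(n_samples):
        obstacle = obstacle_model()
        for (t, x, y), (_t, ox, oy) in zip(robot_path, obstacle):
            if (x - ox) ** 2 + (y - oy) ** 2 < clearance ** 2:
                hits += 1
                break  # one conflict is enough for this sample
    return hits / n_samples

# Degenerate example: an obstacle that always crosses the path at t=1.
robot = [(0, 0.0, 0.0), (1, 1.0, 0.0), (2, 2.0, 0.0)]
always_hit = lambda: [(0, 5.0, 5.0), (1, 1.0, 0.1), (2, 5.0, 5.0)]
p = collision_probability(robot, always_hit, clearance=0.5, n_samples=100)
```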
{"title":"Acquisition of statistical motion patterns in dynamic environments and their application to mobile robot motion planning","authors":"E. Kruse, R. Gutsche, F. Wahl","doi":"10.1109/IROS.1997.655089","DOIUrl":"https://doi.org/10.1109/IROS.1997.655089","url":null,"abstract":"In recent papers we (1996, 1997) have proposed a new path planning approach for mobile robots: statistical motion planning with respect to typical obstacle behavior in order to improve pre-planning in dynamic environments. In this paper, we present our experimental system: in a real environment, cameras observe the workspace in order to detect obstacle motions and to derive statistical data. We have developed new techniques based on stochastic trajectories to model obstacle behavior. Collision probabilities are calculated for polygonal objects moving on piecewise linear trajectories. The statistical data can be applied directly, thus the entire chain from raw sensor data to a stochastic assessment of robot trajectories is closed. Finally, some new work regarding different applications of statistical motion planning is outlined, including road-map approaches for pre-planning, expected time to reach the goal, and reactive behaviors.","PeriodicalId":408848,"journal":{"name":"Proceedings of the 1997 IEEE/RSJ International Conference on Intelligent Robot and Systems. Innovative Robotics for Real-World Applications. IROS '97","volume":"8 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1997-09-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122729656","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 40
Autonomous navigation in ill-structured outdoor environment
Josep Fernández, A. Casals
Presents a methodology for autonomous navigation in weakly structured outdoor environments such as dirt roads or mountain ways. The main problem to solve is the detection of an ill-defined structure, the way, and of the obstacles in the scene, when working under variable lighting conditions. First, we discuss the road-description requirements for autonomous navigation in this kind of environment and propose a simple vision-based sensor configuration. A simplified road description is generated from the analysis of a sequence of color images, considering the constraints imposed by the model of ill-structured roads. This environment description is done in three steps: region segmentation, obstacle detection, and coherence evaluation.
{"title":"Autonomous navigation in ill-structured outdoor environment","authors":"Josep Fernández, A. Casals","doi":"10.1109/IROS.1997.649093","DOIUrl":"https://doi.org/10.1109/IROS.1997.649093","url":null,"abstract":"Presents a methodology for autonomous navigation in weakly structured outdoor environments such as dirt roads or mountain ways. The main problem to solve is the detection of an ill-defined structure-the way-and the obstacles in the scene, when working in variable lighting conditions. First, we discuss the road description requirements to perform autonomous navigation in this kind of environment and propose a simple sensors configuration based on vision. A simplified road description is generated from the analysis of a sequence of color images, considering the constraints imposed by the model of ill-structured roads. This environment description is done in three steps: region segmentation, obstacle detection and coherence evaluation.","PeriodicalId":408848,"journal":{"name":"Proceedings of the 1997 IEEE/RSJ International Conference on Intelligent Robot and Systems. Innovative Robotics for Real-World Applications. IROS '97","volume":"63 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1997-09-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122563080","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 23
Camera calibration from multiple views of a 2D object, using a global nonlinear minimization method
M. Devy, V. Garric, J. Orteu
An important task in most 3D vision systems is camera calibration. Many camera models, numerical methods, and experimental set-ups have been proposed in the literature to solve the calibration problem. We have analysed and tried many methods, and we conclude that the main problems lie in the choice of the numerical methods and of the calibration object. In this paper we propose a method based on a camera model that incorporates lens distortion, using a nonlinear minimization technique that can be performed with multiple views of a single 2D object and subpixel feature extraction. We present an application for which only a 2D calibration object can be used.
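The quantity such a method minimizes can be sketched with a one-term radial-distortion model. The parameterization here (focal length, principal point, a single coefficient k1, and externally supplied normalized coordinates) is a simplified assumption, not the paper's full camera model:

```python
def reproject(xn, yn, f, cx, cy, k1):
    """Project normalized image coordinates (xn, yn) to pixels with a
    one-term radial distortion model: r' = r * (1 + k1 * r^2)."""
    r2 = xn * xn + yn * yn
    d = 1.0 + k1 * r2
    return f * d * xn + cx, f * d * yn + cy

def total_error(observations, params):
    """Sum of squared reprojection errors over all observed corners:
    the cost a nonlinear least-squares routine would minimize over the
    intrinsic parameters (and, in practice, the per-view poses)."""
    f, cx, cy, k1 = params
    err = 0.0
    for xn, yn, u, v in observations:
        up, vp = reproject(xn, yn, f, cx, cy, k1)
        err += (u - up) ** 2 + (v - vp) ** 2
    return err

# Consistent synthetic observations give zero residual.
params = (500.0, 320.0, 240.0, -0.05)
obs = [(0.1, 0.2, *reproject(0.1, 0.2, *params))]
```

Multiple views of the 2D target enter simply as more `(xn, yn, u, v)` tuples in `observations`, which is why a planar object suffices when several poses are observed.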
{"title":"Camera calibration from multiple views of a 2D object, using a global nonlinear minimization method","authors":"M. Devy, V. Garric, J. Orteu","doi":"10.1109/IROS.1997.656569","DOIUrl":"https://doi.org/10.1109/IROS.1997.656569","url":null,"abstract":"An important task in most 3D vision systems is camera calibration. Many camera models, numerical methods and experimental set-ups have been proposed in the literature to solve the calibration problem. We have analysed and tried many methods, and we conclude that the main problems lie in the choice of the numerical methods and on the calibration object. We propose in this paper a method which is based on a camera model that incorporates lens distortion, and involves a nonlinear minimization technique which can be performed using multiple views of a single 2D object and subpixel feature extraction. We present an application for which only a 2D calibration object can be used.","PeriodicalId":408848,"journal":{"name":"Proceedings of the 1997 IEEE/RSJ International Conference on Intelligent Robot and Systems. Innovative Robotics for Real-World Applications. IROS '97","volume":"213 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1997-09-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122662244","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 31