Continuous area sweeping: a task definition and initial approach
Mazda Ahmadi, P. Stone
Pub Date: 2005-07-18  DOI: 10.1109/ICAR.2005.1507430
As mobile robots become increasingly autonomous over extended periods of time, opportunities arise for their use on repetitive tasks. We define and implement behaviors for a class of such tasks that we call continuous area sweeping tasks. A continuous area sweeping task is one in which a robot (or group of robots) must repeatedly visit all points in a fixed area, possibly with non-uniform frequency, as specified by a task-dependent cost function. Examples of problems that need continuous area sweeping are trash removal in a large building and routine surveillance. We present a formulation for this problem and an initial algorithm to address it. The approach is analyzed analytically and is fully implemented and tested, both in simulation and on a physical robot.
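The task-dependent cost function can be made concrete with a small sketch: each cell of the swept area accrues "urgency" at its own rate, and the robot greedily services the most urgent cell, so higher-rate cells are visited more often. This is an illustrative toy policy under assumed names (`sweep`, the greedy rule), not the algorithm of the paper.

```python
# Toy continuous-sweeping policy (illustrative, not the paper's algorithm):
# each cell accrues urgency at a task-dependent rate; the robot repeatedly
# services the most urgent cell, so high-rate cells are visited more often.

def sweep(rates, steps):
    """Greedily visit cells; `rates` is the per-cell urgency growth rate."""
    urgency = [0.0] * len(rates)
    visits = []
    for _ in range(steps):
        for i, r in enumerate(rates):   # all cells accrue urgency
            urgency[i] += r
        target = max(range(len(rates)), key=urgency.__getitem__)
        urgency[target] = 0.0           # servicing a cell resets its urgency
        visits.append(target)
    return visits

# Cell 0 accrues urgency twice as fast and is swept most often.
print(sweep([2.0, 1.0, 1.0], 8))
```

The non-uniform visit frequency emerges directly from the cost function: doubling a cell's rate roughly doubles how often the greedy policy returns to it.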
Designing an aerial robot for hover-and-stare surveillance
P. Oh, M. Joyce, J. Gallagher
Pub Date: 2005-07-18  DOI: 10.1109/ICAR.2005.1507428
When disasters and crises arise, visual information needs to be rapidly gathered and assessed in order to assist rescue workers and emergency personnel. Often such situations are life-threatening and people cannot safely obtain this information. Disasters in urban areas are particularly taxing: structural collapse, damaged staircases and the loss of communication infrastructure aggravate rescue efforts. Robots equipped with cameras can be employed to capture visual situational awareness. The focus of our work is therefore designing a backpackable aerial robot that can hover-and-stare. Such a robot would ascend, peer through windows, and transmit video to an operator. This paper presents a backpackable tandem-rotor prototype that can carry a wireless camera.
Manipulation of deformable linear objects: Force-based simulation approach for haptic feedback
B. Kahl, D. Henrich
Pub Date: 2005-07-18  DOI: 10.1109/ICAR.2005.1507400
In this paper we present a new approach to simulating deformable linear objects (DLOs) for use in a haptic feedback system. Our goal is to balance computational efficiency and precision. The presented approach is inspired by the well-known finite element method, but avoids an exact physical model.
Dynamic analysis of casting and winding with hyper-flexible manipulator
T. Suzuki, Y. Ebihara, K. Shintani
Pub Date: 2005-07-18  DOI: 10.1109/ICAR.2005.1507392
In this paper, the casting and winding dynamics of a hyper-flexible manipulator are analyzed. Casting and winding with hyper-flexible elements such as strings, ropes or wires can be a useful operation for capturing a distant target object. A multi-link system connected by passive non-elastic joints is employed as a discrete dynamic model of hyper-flexible systems, and the casting and winding dynamics are analyzed with this multi-link model. The motion can be divided into several phases: casting, contacting, winding and capturing. This paper focuses on the casting motion suitable for winding onto a round target object. Simulations are executed to clarify a casting motion that produces a good, firm winding. Several criteria, such as contact position, contact velocity, slack length and looseness on the target, are employed to evaluate the winding results.
Cooperative transport of extended payloads
A. Bouloubasis, J. McKee
Pub Date: 2005-07-18  DOI: 10.1109/ICAR.2005.1507511
In this paper we present experimental results for the dual-robot transport of an extended payload. Two robotic rovers designed specifically for this task are described. Each rover incorporates a 4-DOF robot arm with three active joints (one of which is a gripper), a passive wrist, and a mobile base that employs a rocker-bogie design. A set of behaviors has been developed to support the task, integrating simple sensing with control. We describe the behaviors and their integration within the overall task structure. The experimental results focus on the manipulation elements of the task, but cover a complete cycle of pick-up, traversal, and put-down.
Rational aggressive behaviour reduces interference in a mobile robot team
Sarah Brown, Mauricio Zuluaga, Yinan Zhang, R. Vaughan
Pub Date: 2005-07-18  DOI: 10.1109/ICAR.2005.1507491
Spatial interference can reduce the effectiveness of teams of mobile robots. We examine a team of robots with no centralized control performing a transportation task in which robots frequently interfere with each other. The robots must work in the same space, so territorial methods are not appropriate. Previously we have shown that a stereotyped competition, inspired by aggressive displays in various animal species, can reduce interference and improve overall system performance. However, none of the methods previously devised for selecting a robot's 'aggression level' performed better than selecting aggression at random. This paper describes a new, principled approach to selecting an aggression level, based on the robot's investment in a task. Simulation experiments with teams of six robots in an office-type environment show that, under certain conditions, this method can significantly improve system performance compared to a random competition and a noncompetitive control experiment. Finally, we discuss the benefits and limitations of such a scheme with respect to the specific environment.
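As a toy illustration of investment-based aggression, a robot's aggression level can be taken as the fraction of its task already completed, with the less-invested robot yielding. The linear mapping and the names `aggression`/`resolve` are illustrative assumptions, not the paper's actual scheme.

```python
# Toy investment-based aggression (illustrative names and mapping, not the
# paper's scheme): each robot's aggression grows with the fraction of its
# current task already completed; the less-invested robot backs off.

def aggression(invested, task_length):
    """Aggression level in [0, 1], rising with completed effort."""
    return min(invested / task_length, 1.0)

def resolve(robot_a, robot_b):
    """Return the robot that wins the display and keeps the corridor."""
    (_, inv_a, len_a), (_, inv_b, len_b) = robot_a, robot_b
    return robot_a if aggression(inv_a, len_a) >= aggression(inv_b, len_b) else robot_b

# Robot "a" has completed 80% of its run, "b" only 20%: "a" wins.
winner = resolve(("a", 8.0, 10.0), ("b", 2.0, 10.0))
print(winner[0])  # a
```

The rationale mirrors the paper's argument: backing off costs the heavily-invested robot more, so the outcome of the display should favour the robot with more sunk effort.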
Control strategy for planar vertical jump
V. Núñez, S. Drakunov, N. Nadjar-Gauthier, J. Cadiou
Pub Date: 2005-07-18  DOI: 10.1109/ICAR.2005.1507506
In this paper we present a new, application-oriented control method for the biped jump. We restrict our study to the vertical jump with both feet together and legs placed exactly side by side, with the robot moving in the sagittal plane. The control method is based on the sliding mode technique. We arrive at a simple control law that imposes trajectory following for the distance between the feet and the total center of mass, and that keeps the CoM above the feet during the whole jump (take-off, flight, landing). This allows the jump to be controlled precisely and reduces the impact at landing.
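The sliding-mode idea can be sketched for a single coordinate such as the foot-to-CoM distance: a surface s = ė + λe is driven to zero by a switching term with a boundary layer. The gains and the double-integrator plant below are illustrative assumptions, not the paper's controller.

```python
# Illustrative sliding-mode tracking for one coordinate (e.g. the
# foot-to-CoM distance): drive s = de + lam*e to zero with a saturated
# switching term. Gains and plant are assumptions, not the paper's design.

def smc_step(x, v, x_ref, v_ref, lam=4.0, k=50.0, phi=0.05):
    """One control evaluation; returns the commanded acceleration."""
    e, de = x - x_ref, v - v_ref
    s = de + lam * e                     # sliding surface
    sat = max(-1.0, min(1.0, s / phi))   # boundary layer limits chattering
    return -lam * de - k * sat

# Double-integrator plant tracking a constant 0.3 m reference.
x, v, dt = 0.0, 0.0, 0.001
for _ in range(5000):
    u = smc_step(x, v, x_ref=0.3, v_ref=0.0)
    v += u * dt
    x += v * dt
print(round(x, 2))  # settles at the reference
```

Once the state reaches the surface, the error decays exponentially at rate λ regardless of the (bounded) plant uncertainty, which is the robustness property that makes sliding mode attractive for impact-prone motions like jumping.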
DynaTracker: Target tracking in active video surveillance systems
P. Guha, Dibyendu Palai, Dip Goswami, A. Mukerjee
Pub Date: 2005-07-18  DOI: 10.1109/ICAR.2005.1507473
Active video surveillance systems raise challenging research issues at the interface of computer vision, pattern recognition and control system analysis. A significant part of such systems is devoted to active camera control for efficient target tracking. DynaTracker is a pan-tilt based active camera system that maintains a continuous track of a moving target while keeping it at a pre-specified region (typically the center) of the image. The significant contributions of this work are the use of the mean-shift algorithm for visual tracking and the derivation of the error dynamics for a proportional-integral control action. Stability analysis and optimal controller gain selection are performed from simulation studies of the derived error dynamics, and the simulation predictions are validated in physical experiments. The present implementation of DynaTracker runs on a standard Pentium IV PC at an average speed of 10 frames per second while operating on color images at 320×240 resolution.
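The proportional-integral action on the tracking error can be sketched as follows: the mean-shift tracker (not shown) reports the target's pixel offset from the image centre, and a PI loop converts that error into a pan command. The gains, the one-command-per-frame camera model, and `make_pi` are assumptions for illustration, not the paper's identified dynamics.

```python
# Illustrative PI loop for one pan axis: the mean-shift tracker (not shown)
# reports the target's pixel offset from the image centre; the PI controller
# turns that error into a pan command. Gains and the simple camera model
# are assumptions, not the paper's identified error dynamics.

def make_pi(kp=0.5, ki=0.1, dt=0.1):
    integral = 0.0
    def step(error_px):
        nonlocal integral
        integral += error_px * dt        # accumulate the integral term
        return kp * error_px + ki * integral
    return step

pi = make_pi()
offset = 40.0                  # target starts 40 px right of centre
for _ in range(500):
    offset -= pi(offset)       # camera pans, shrinking the pixel error
print(abs(offset) < 1.0)       # error driven toward zero
```

The integral term is what removes steady-state error for a steadily drifting target; the stability analysis in the paper amounts to choosing kp and ki so that the closed-loop error dynamics of exactly this kind of loop stay well damped.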
Integrating object and grasp recognition for dynamic scene interpretation
S. Ekvall, D. Kragic
Pub Date: 2005-07-18  DOI: 10.1109/ICAR.2005.1507432
Understanding and interpreting dynamic scenes and activities is a very challenging problem. In this paper, we present a system capable of learning robot tasks from demonstration. Classical robot task programming requires an experienced programmer and a lot of tedious work. In contrast, programming by demonstration is a flexible framework that reduces the complexity of programming robot tasks, and allows end-users to demonstrate the tasks instead of writing code. We present our recent steps towards this goal. A system for learning pick-and-place tasks by manually demonstrating them is presented. Each demonstrated task is described by an abstract model involving a set of simple sub-tasks: which object is moved, where it is moved, and which grasp type is used to move it.
Command path compensation algorithm for rover tele-driving system and its evaluation
Y. Kunii, M. Moriyama, S. Nagatsuka, Y. Ishimaru
Pub Date: 2005-07-18  DOI: 10.1109/ICAR.2005.1507509
In this paper, we discuss a tele-driving method for long-range driving of a mobile robot in natural terrain, and experimental results are presented. An operator uses a virtual-world simulator to create a command path. The virtual environment of this simulator is constructed from measurement data gathered by the rover. There is, however, a communication delay between the control station and the remote site, so the old map data the operator used for path planning differs from the new data on which the rover is actually driving. The operator's path command is therefore less reliable for avoiding obstacles and reaching the goal. To improve the reliability of the operator's command, we propose command-data compensation (CDC), which compensates for this difference as a distortion of the environmental map.