Pub Date: 2015-05-11 | DOI: 10.1109/TePRA.2015.7219683
Chen Gao, K. Panetta, S. Agaian
Image corners encapsulate gradient changes in multiple directions, which makes them efficient features for robotic navigation algorithms. Template based corner detection has low computational complexity and is straightforward to implement, and with appropriately designed templates it can also achieve satisfactory detection accuracy. In this paper, we introduce two new template based corner detection algorithms to assist robot vision: matching based corner detection (MBCD) and correlation based corner detection (CBCD). These two approaches outperform existing template based approaches in that they reduce the detection of spurious corners by considering ideal corners whose arms are at least two pixels long. Experimental results show that the proposed algorithms detect the essential corners of synthetic and natural images satisfactorily according to human visual perception. We also examine the robustness of the two approaches in terms of average repeatability and localization error. Their computational efficiency makes these template based corner detection algorithms suitable for real-time support in robotic applications. Comparisons with existing corner detection algorithms are also presented.
{"title":"Robust template based corner detection algorithms for robotic vision","authors":"Chen Gao, K. Panetta, S. Agaian","doi":"10.1109/TePRA.2015.7219683","DOIUrl":"https://doi.org/10.1109/TePRA.2015.7219683","url":null,"abstract":"Image corners encapsulate gradient changes in multiple directions. Therefore, corners are considered as efficient features for use in robotic navigation algorithms. Template based corner detection has a low computational complexity and is straightforward to implement. With the appropriate design of templates, satisfactory detection accuracy can also be achieved. In this paper, we introduce two new template based corner detection algorithms to be used to assist robot vision: the matching based corner detection, namely, MBCD; and the correlation based corner detection, namely, CBCD. These two approaches outperform existing template based approaches in the means that they reduce detection of spurious corners by considering ideal corners with at least two-pixel length on the corner arm directions. Experimental results show that the proposed algorithms detect essential corners for synthetic images and natural images satisfactorily according to human visual perception. We also examine the robustness of the two corner detection approaches in terms of the average repeatability and localization error. Since our approaches are computationally efficient, it makes these template based corner detection algorithms suitable for real time support in robotic applications. 
Comparisons with existing corner detection algorithms are also presented.","PeriodicalId":325788,"journal":{"name":"2015 IEEE International Conference on Technologies for Practical Robot Applications (TePRA)","volume":"15 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-05-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122122821","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
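The template-matching idea behind MBCD and CBCD can be sketched as follows. The abstract does not give the actual templates, score, or threshold, so the 5x5 upper-left-corner template, the normalized cross-correlation score, and the 0.9 threshold below are illustrative assumptions, not the paper's method:

```python
import numpy as np

# Hypothetical 5x5 binary template for an upper-left corner whose two arms
# are at least two pixels long, as the abstract requires. The paper's actual
# MBCD/CBCD templates are not reproduced here.
TEMPLATE = np.array([
    [1, 1, 1, 1, 1],
    [1, 1, 1, 1, 1],
    [1, 1, 0, 0, 0],
    [1, 1, 0, 0, 0],
    [1, 1, 0, 0, 0],
], dtype=float)

def correlation_corner_score(patch: np.ndarray, template: np.ndarray) -> float:
    """Normalized cross-correlation between an image patch and a template."""
    p = patch - patch.mean()
    t = template - template.mean()
    denom = np.sqrt((p * p).sum() * (t * t).sum())
    return float((p * t).sum() / denom) if denom > 0 else 0.0

def detect_corners(image: np.ndarray, template: np.ndarray, thresh: float = 0.9):
    """Slide the template over the image; report centers of high-scoring windows."""
    h, w = template.shape
    corners = []
    for r in range(image.shape[0] - h + 1):
        for c in range(image.shape[1] - w + 1):
            score = correlation_corner_score(image[r:r + h, c:c + w], template)
            if score >= thresh:
                corners.append((r + h // 2, c + w // 2))
    return corners
```

A production detector would use a template bank covering corner orientations plus non-maximum suppression; this sketch shows only the core sliding-template scoring loop.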
Pub Date: 2015-05-11 | DOI: 10.1109/TePRA.2015.7219690
S. V. Delden, Grace Chenevert, John W. Burris
This paper reports on an interactive human-robot system that uses the Microsoft Kinect to track a human fingertip. By pointing to target locations in the robotic manipulator's work area, a user can intuitively and rapidly develop certain types of industrial robotic applications. The efficacy of the system is limited by the Kinect's ability to precisely identify the fingertip in 3D space; such human-computer interaction devices are typically used to track large movements and gestures in gaming and other applications. Empirical results show that the approach can precisely identify 3D location data and that the accuracy of the system is limited to roughly the width of the user's finger.
{"title":"Finger tip tracking for manipulator jogging using the kinect","authors":"S. V. Delden, Grace Chenevert, John W. Burris","doi":"10.1109/TePRA.2015.7219690","DOIUrl":"https://doi.org/10.1109/TePRA.2015.7219690","url":null,"abstract":"This paper reports on an interactive Human-Robot system which uses the Microsoft Kinect to track a human finger tip. By pointing to target locations in the robotic manipulator's workarea, a user can intuitively and rapidly develop certain types of industrial robotic applications. The efficacy of the system is limited by the Kinect's ability to precisely identify the finger tip in 3D space. Such Human-Computer Interactive devices are typically used to track large movements/gestures by a user in gaming and other applications. Empirical results are reported which show that the approach is able to precisely identify 3D location data and that the accuracy of the system is limited to the width of the user's finger.","PeriodicalId":325788,"journal":{"name":"2015 IEEE International Conference on Technologies for Practical Robot Applications (TePRA)","volume":"29 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-05-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128746951","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2015-05-11 | DOI: 10.1109/TePRA.2015.7219671
H. Khan, S. Kitano, M. Frigerio, Marco Camurri, Victor Barasuol, R. Featherstone, D. Caldwell, C. Semini
This paper presents the development of the lightweight hydraulic quadruped robot MiniHyQ. To the best of the authors' knowledge, MiniHyQ is the lightest and smallest hydraulic quadruped robot built so far. MiniHyQ is fully torque controlled, with reconfigurable leg configurations, a wide joint range of motion, and a compact onboard power pack. The robot has almost the same leg length as our group's previous robot, HyQ [1], but its link segment lengths are 15% shorter in the flex configuration thanks to a special isogram knee joint mechanism. It weighs only 35 kg (24 kg with an offboard pump unit), making it portable by one person. To achieve this low weight, miniature hydraulic actuators were carefully selected, allowing us to reduce the required pump size inside the torso. Using a hydraulic rotary actuator for the hip and linear actuators with the isogram mechanism for the knee joint achieves a wider range of motion, allowing a self-righting motion. For design validation and hardware testing, a series of experiments was conducted on a single MiniHyQ leg.
{"title":"Development of the lightweight hydraulic quadruped robot — MiniHyQ","authors":"H. Khan, S. Kitano, M. Frigerio, Marco Camurri, Victor Barasuol, R. Featherstone, D. Caldwell, C. Semini","doi":"10.1109/TePRA.2015.7219671","DOIUrl":"https://doi.org/10.1109/TePRA.2015.7219671","url":null,"abstract":"This paper presents the development of the lightweight hydraulic quadruped robot MiniHyQ. To the authors' best knowledge, MiniHyQ is the lightest and smallest hydraulic quadruped robot that has been built so far. MiniHyQ is a fully torque controlled robot. It has reconfigurable leg configurations. It has wide joint range of motion and an onboard compact power pack. The robot has almost the same leg length as the previous robot (HyQ [1], built by our group), but its link segment lengths are 15% less in flex configuration, due to the special isogram knee joint mechanism. Its weight is only 35kg (24kg with an offboard pump unit), which makes it portable by one person. To achieve this lightweight, miniature hydraulic actuators were carefully selected, allowing us to reduce the required pump size inside the torso. By using a hydraulic rotary actuator for the hip and linear actuators with isogram mechanism for the knee joint, a wider range of motion is achieved, allowing a self-righting motion. 
For the design validation and hardware testing, series of experiments are conducted on MiniHyQ single leg.","PeriodicalId":325788,"journal":{"name":"2015 IEEE International Conference on Technologies for Practical Robot Applications (TePRA)","volume":"221 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-05-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"134510343","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2015-05-11 | DOI: 10.1109/TePRA.2015.7219681
Daniel J. Brooks, E. McCann, Jordan Allspaw, M. Medvedev, H. Yanco
In the field of human-robot interaction, collaborative and/or adversarial game play can be used as a testbed for evaluating theories and hypotheses in areas such as resolving problems with another agent's work and turn-taking etiquette. Such interactions are often encumbered by constraints imposed so that the robot can function, which may impede a participant's ability to generalize from the interaction with the robot to similar interactions they have had with people. We present a checkers-playing system that, with minimal constraints, can play checkers with a human, even crowning the human's kings by placing a piece atop the appropriate checker. Our board and pieces were purchased online and required only the addition of colored stickers on the checkers to contrast them with the board. This paper describes our system design and evaluates its performance and accuracy through games with twelve human players.
{"title":"Sense, plan, triple jump","authors":"Daniel J. Brooks, E. McCann, Jordan Allspaw, M. Medvedev, H. Yanco","doi":"10.1109/TePRA.2015.7219681","DOIUrl":"https://doi.org/10.1109/TePRA.2015.7219681","url":null,"abstract":"In the field of human-robot interaction, collaborative and/or adversarial game play can be used as a testbed to evaluate theories and hypotheses in areas such as resolving problems with another agent's work and turn-taking etiquette. It is often the case that such interactions are encumbered by constraints made to allow the robot to function. This may affect interactions by impeding a participant's generalization of their interaction with the robot to similar previous interactions they have had with people. We present a checkers playing system that, with minimal constraints, can play checkers with a human, even crowning the human's kings by placing a piece atop the appropriate checker. Our board and pieces were purchased online, and only required the addition of colored stickers on the checkers to contrast them with the board. This paper describes our system design and evaluates its performance and accuracy by playing games with twelve human players.","PeriodicalId":325788,"journal":{"name":"2015 IEEE International Conference on Technologies for Practical Robot Applications (TePRA)","volume":"24 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-05-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133051747","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2015-05-11 | DOI: 10.1109/TePRA.2015.7219657
Daiki Suzuki, Yusuke Yamanoi, H. Yamada, Ko Wakita, R. Kato, H. Yokoi
In the control of an electromyogram prosthetic hand, a stationary grasping posture is classified: a static posture, such as an open hand, in which the operator intentionally sustains muscular contraction. Classifying the stationary grasping posture introduces a movement delay in the robot hand, which feels unnatural to the operator. To solve this problem, the authors propose a method that predicts the grasping posture using the surface electromyogram (sEMG) at low muscle contraction power during hand pre-shaping. This paper presents our research on the performance of grasping-posture classification using sEMG while naturally reaching for and grasping an object. Experimental results demonstrate that the sEMG amplitude peak during hand pre-shaping is useful for classifying the grasping posture.
{"title":"14. Grasping-posture classification using myoelectric signal on hand pre-shaping for natural control of myoelectric hand","authors":"Daiki Suzuki, Yusuke Yamanoi, H. Yamada, Ko Wakita, R. Kato, H. Yokoi","doi":"10.1109/TePRA.2015.7219657","DOIUrl":"https://doi.org/10.1109/TePRA.2015.7219657","url":null,"abstract":"A stationary grasping posture is classified in the control method of an electromyogram prosthetic hand. This grasping posture is static, such as an open hand posture, and one in which the operator of an electromyogram prosthetic hand intentionally continues muscular contraction. In classifying the stationary grasping posture, a movement delay of the robot hand occurs, which feels unnaturally to the operator. To solve these problems, authors propose a method that predicts a grasping posture using the surface electromyogram (sEMG) of low muscle contraction power in hand pre-shaping. In this paper, our research on the performance of grasping posture classification using sEMG for naturally reaching for and grasping an object is presented. Experimental results demonstrate that when the sEMG amplitude peaks in hand pre-shaping, it is useful in classifying the grasping posture.","PeriodicalId":325788,"journal":{"name":"2015 IEEE International Conference on Technologies for Practical Robot Applications (TePRA)","volume":"236 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-05-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122697701","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2015-05-11 | DOI: 10.1109/TePRA.2015.7219693
Mehmet Ali Guney, I. Raptis
In recent years, Autonomous Guided Vehicles (AGVs) have gradually been integrated into warehouse management systems. Employing AGVs has numerous advantages over conventional warehouse systems in terms of cost, scalability, and efficiency. In this work, we present the development of a small-scale testbed platform for testing and validating warehouse automation control algorithms using a swarm of AGVs. The proposed platform is scalable, fast, and effective in both cost and dimensions. The robotic drives are centimeter-scale forklifts that autonomously transport an arbitrary number of circular pallets to predefined reference locations. A conflict resolution algorithm ensures that the drives do not collide with each other during operation. In addition, a task allocation logic assigns the pallets so that the drives are not enclosed by the transported objects. The applicability of the testbed platform is demonstrated through experimental results.
{"title":"A robotic experimental platform for testing and validating warehouse automation algorithms","authors":"Mehmet Ali Guney, I. Raptis","doi":"10.1109/TePRA.2015.7219693","DOIUrl":"https://doi.org/10.1109/TePRA.2015.7219693","url":null,"abstract":"In the recent years, Autonomous Guided Vehicles (AGVs) are gradually integrated to warehouse management systems. The employment of AGVs has numerous advantages over conventional warehouse systems in terms of cost, scalability and efficiency. In this work, we present the development of a small-scale test-bed platform for testing and validating warehouse automation control algorithms utilizing a swarm of AGVs. The proposed platform is scalable, fast, and effective in both cost and dimensions. The robotic drives are centimeter-scale forklifts that transport autonomously an arbitrary number of circular pallets to predefined reference locations. A conflict resolution algorithm is implemented such that the drives do not collide with each other during their operation. In addition, a task allocation logic handles the pallets' assignment to avoid the enclosure of the drives by the transported objects. The applicability of the testbed platform is demonstrated through experimental results.","PeriodicalId":325788,"journal":{"name":"2015 IEEE International Conference on Technologies for Practical Robot Applications (TePRA)","volume":"47 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-05-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123552046","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2015-05-11 | DOI: 10.1109/TePRA.2015.7219694
Ming Luo, E. Skorina, Weijia Tao, Fuchen Chen, C. Onal
Soft actuators can be useful in human-occupied environments because of their adaptable compliance and light weight. We previously introduced a variation of fluidic soft actuators that we call the reverse pneumatic artificial muscle (rPAM) and developed an analytical model to predict its performance, both individually and while antagonistically driving a 1-degree-of-freedom revolute joint. Here, we expand upon this previous work, adding a correction term to improve model performance and using the model to optimize the kinematic module dimensions so as to maximize achievable joint angles. We also offer advances in the joint design to improve its ability to operate at these larger angles. The new joint has a workspace of around ±60°, which the improved model predicts accurately.
{"title":"Optimized design of a rigid kinematic module for antagonistic soft actuation","authors":"Ming Luo, E. Skorina, Weijia Tao, Fuchen Chen, C. Onal","doi":"10.1109/TePRA.2015.7219694","DOIUrl":"https://doi.org/10.1109/TePRA.2015.7219694","url":null,"abstract":"Soft actuators can be useful in human-occupied environments because of their adaptable compliance and light weight. We previously introduced a variation of fluidic soft actuators we call the reverse pneumatic artificial muscle (rPAM), and developed an analytical model to predict its performance both individually and while driving a 1 degree of freedom revolute joint antagonistically. Here, we expand upon this previous work, adding a correction term to improve model performance and using it to perform optimization on the kinematic module dimensions to maximize achievable joint angles. We also offer advances on the joint design to improve its ability to operate at these larger angles. The new joint had a workspace of around ±60°, which was predicted accurately by the improved model.","PeriodicalId":325788,"journal":{"name":"2015 IEEE International Conference on Technologies for Practical Robot Applications (TePRA)","volume":"79 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-05-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127187992","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2015-05-11 | DOI: 10.1109/TePRA.2015.7219692
M. Leroux, M. Raison, T. Adadja, S. Achiche
The manual control of manipulator robots can be complex and time consuming even for simple tasks, because the robot has more degrees of freedom (DoF) than the joystick has simultaneous commands. Among the emerging solutions, eyetracking, which identifies the user's gaze direction, is expected to automatically command some of the robot's DoFs. However, eyetracking in three dimensions (3D) still yields large and variable errors, from several centimeters to several meters. The objective of this paper is to combine eyetracking and computer vision to automate the approach of a robot to its targeted point by acquiring the point's 3D location. The methodology combines three steps:
- A regular eyetracking device measures the user's mean gaze direction.
- The user's field of view is recorded with a webcam, and the targeted point is identified by image analysis.
- The distance between the target and the user is computed by geometrical reconstruction, yielding a 3D location for the target.
Over 3 trials, the error analysis reveals that the computed 3D coordinates of the target have an average error of 5.5 cm, 92% more accurate than computing the point of gaze from eyetracking alone, whose estimated error is 72 cm. Finally, we discuss an innovative way to complete the system with smart targets to overcome some of the current limitations of the proposed method.
{"title":"Combination of eyetracking and computer vision for robotics control","authors":"M. Leroux, M. Raison, T. Adadja, S. Achiche","doi":"10.1109/TePRA.2015.7219692","DOIUrl":"https://doi.org/10.1109/TePRA.2015.7219692","url":null,"abstract":"The manual control of manipulator robots can be complex and time consuming even for simple tasks, due to a number of degrees of freedom (DoF) of the robot that is higher than the number of simultaneous commands of the joystick. Among the emerging solutions, the eyetracking, which identifies the user gaze direction, is expected to automatically command some of the robot DoFs. However, the use of eyetracking in three dimensions (3D) still gives large and variable errors from several centimeters to several meters. The objective of this paper, is to combine eyetracking and computer vision to automate the approach of a robot to its targeted point by acquiring its 3D location. The methodology combines three steps : - A regular eyetracking device measures the user mean gaze direction. - The field of view of the user is recorded using a webcam, and the targeted point identified by image analysis. - The distance between the target and the user is computed using geometrical reconstruction, providing a 3D location point for the target. On 3 trials, the error analysis reveals that the computed coordinates of the target 3D localization has an average error of 5.5cm, which is 92% more accurate than using the eyetracking only for point of gaze calculation, with an estimated error of 72cm. 
Finally, we discuss an innovative way to complete the system with smart targets to overcome some of the current limitations of the proposed method.","PeriodicalId":325788,"journal":{"name":"2015 IEEE International Conference on Technologies for Practical Robot Applications (TePRA)","volume":"4 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-05-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127191422","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
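The geometrical reconstruction step can be illustrated by intersecting two rays: one from the eyetracker (eye position plus gaze direction) and one from the webcam (camera center plus pixel bearing). The midpoint of their shortest connecting segment is a standard least-squares estimate of the target; this is generic ray-intersection geometry, not the authors' exact formulation:

```python
import numpy as np

def closest_point_between_rays(o1, d1, o2, d2):
    """Midpoint of the shortest segment between two rays o + t*d.
    Used here to reconstruct a 3D gaze target from the eyetracker ray
    and the webcam ray; returns None for near-parallel rays."""
    d1 = np.asarray(d1, float) / np.linalg.norm(d1)
    d2 = np.asarray(d2, float) / np.linalg.norm(d2)
    o1, o2 = np.asarray(o1, float), np.asarray(o2, float)
    # Minimize |(o1 + t1*d1) - (o2 + t2*d2)|^2 over t1, t2
    a, b, c = d1 @ d1, d1 @ d2, d2 @ d2
    w = o1 - o2
    denom = a * c - b * b
    if abs(denom) < 1e-12:        # parallel gaze and camera rays
        return None
    t1 = (b * (d2 @ w) - c * (d1 @ w)) / denom
    t2 = (a * (d2 @ w) - b * (d1 @ w)) / denom
    return 0.5 * ((o1 + t1 * d1) + (o2 + t2 * d2))
```

With noisy gaze directions the two rays rarely intersect exactly, which is why the midpoint formulation (rather than an exact intersection) is the usual choice; the residual segment length also gives a cheap confidence measure.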
Pub Date: 2015-05-11 | DOI: 10.1109/TePRA.2015.7219686
Benjamin Axelrod, Wesley H. Huang
In order to access many spaces in human environments, mobile robots need to be adept at using doors: opening the door, traversing (i.e., passing through) the doorway, and possibly closing the door afterwards. The challenges in these problems vary with the type of door (push-/pull-doors, self-closing mechanisms, etc.) and the type of door handle (knob, lever, crashbar, etc.). In addition, the capabilities and limitations of the robot can strongly affect the techniques and strategies needed for these tasks. We have developed a system that autonomously opens and traverses push- and pull-doors, with or without self-closing mechanisms, with knobs or levers, using an iRobot 510 PackBot® (a nonholonomic mobile base with a 5-degree-of-freedom arm) and a custom gripper with a passive 2-degree-of-freedom wrist. To the best of our knowledge, our system is the first to demonstrate autonomous door opening and traversal on the most challenging combination: a pull-door with a self-closing mechanism. In this paper, we describe the operation of our system and the results of our experimental testing.
{"title":"Autonomous door opening and traversal","authors":"Benjamin Axelrod, Wesley H. Huang","doi":"10.1109/TePRA.2015.7219686","DOIUrl":"https://doi.org/10.1109/TePRA.2015.7219686","url":null,"abstract":"In order to access many spaces in human environments, mobile robots need to be adept at using doors: opening the door, traversing (i.e., passing through) the doorway, and possibly closing the door afterwards. The challenges in these problems vary with the type of door (push-/pull-doors, self-closing mechanisms, etc.) and type of door handle (knob, lever, crashbar, etc.) In addition, the capabilities and limitations of the robot can have a strong effect on the techniques and strategies needed for these tasks. We have developed a system that autonomously opens and traverses push- and pull-doors, with or without self-closing mechanisms, with knobs or levers, using an iRobot 510 PackBot® (a nonholonomic mobile base with a 5 degree-of-freedom arm) and a custom gripper with a passive 2 degree-of-freedom wrist. To the best of our knowledge, our system is the first to demonstrate autonomous door opening and traversal on the most challenging combination of a pull-door with a self-closing mechanism. In this paper, we describe the operation of our system and the results of our experimental testing.","PeriodicalId":325788,"journal":{"name":"2015 IEEE International Conference on Technologies for Practical Robot Applications (TePRA)","volume":"259 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-05-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130808298","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2015-05-11 | DOI: 10.1109/TePRA.2015.7219698
A. Dang, J. Horn
In this paper, we propose a novel approach to controlling autonomous robots so that they achieve a desired linear formation while moving toward a target position. First, the robot closest to the target is selected as the leader of the swarm, and the desired formation is built from the relative position between this leader and the target. Second, the trajectories of the remaining robots toward the optimal positions in the desired formation are driven by artificial force fields, consisting of local and global attractive potential fields surrounding each virtual node in the desired formation. Furthermore, an orientation controller guarantees that the desired formation always keeps an invariant heading toward the target position. In addition, local repulsive force fields around each robot and obstacle are employed to avoid collisions during movement. The stability of a swarm following a desired collinear formation in an invariant direction toward the target is verified in simulations and experiments.
{"title":"Collinear formation control of autonomous robots to move towards a target using artificial force fields","authors":"A. Dang, J. Horn","doi":"10.1109/TePRA.2015.7219698","DOIUrl":"https://doi.org/10.1109/TePRA.2015.7219698","url":null,"abstract":"In this paper, we propose a novel approach to control autonomous robots to achieve a desired linear formation during movement towards the target position. Firstly, one robot, which has the closest distance to the target, is selected as the leader of the swarm. The desired formation is built based on the relative position between this leader and the target. Secondly, the trajectory of the remaining robots towards the optimal positions in the desired formation is driven by the artificial force fields. These force fields consist of the local and global attractive potential fields surrounding each virtual node in the desired formation. Furthermore, an orientation controller is added in order to guarantee that the desired formation is always headed in the invariant direction to the target position. In addition, the local repulsive force fields around each robot and obstacle are employed in order to avoid collisions during movement. The stability of a swarm following a desired collinear formation in invariant direction towards the target is verified in simulations and experiments.","PeriodicalId":325788,"journal":{"name":"2015 IEEE International Conference on Technologies for Practical Robot Applications (TePRA)","volume":"170 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-05-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130812395","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}