Transparent integration of a real-time collision safety system to a motor control chain of a service robot
V. Zakharov, Richard Cubek, W. Ertel
2015 IEEE International Conference on Technologies for Practical Robot Applications (TePRA)
Pub Date: 2015-08-27, DOI: 10.1109/TePRA.2015.7219680
Abstract: Most service robots are built on commercially available platforms that need to be augmented for a particular application. A common problem is ensuring reliable behavior for tasks with hard real-time demands, such as collision detection and emergency braking, which cannot be guaranteed by top-level software running on a general-purpose PC. We introduce an approach for transparently integrating real-time-capable hardware for robust control and propose an open-source solution for a group of platforms controlled via standard buses. From the software perspective, the basic safety functions are integrated without changing the top-level software. We demonstrate how the inherent safety problems of this particular group of robotic platforms can be mitigated using the given approach, and present evaluation results for the implemented solution.
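The core idea, a safety layer that sits between the top-level software and the motor controller and overrides velocity commands when braking can no longer be guaranteed, can be sketched as follows. This is a minimal illustration under an assumed constant-deceleration braking model; the function names and parameters are not from the paper:

```python
# Hypothetical sketch: a safety layer intercepting commands on the way
# to the motor controller, overriding them when an obstacle is within
# stopping distance (constant-deceleration model, assumed parameters).

def braking_distance(v, decel=1.0):
    """Distance (m) needed to stop from speed v (m/s) at constant decel (m/s^2)."""
    return v * v / (2.0 * decel)

def safe_command(v_cmd, obstacle_dist, decel=1.0, margin=0.1):
    """Pass the commanded speed through unless the obstacle is closer than
    the stopping distance plus a safety margin; then brake to zero."""
    if obstacle_dist <= braking_distance(v_cmd, decel) + margin:
        return 0.0  # emergency brake
    return v_cmd
```

Because the check wraps the existing command stream, the top-level software needs no modification, which is the sense in which the integration is transparent.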
Smooth RRT-connect: An extension of RRT-connect for practical use in robots
Chelsea Lau, Katie Byl
Pub Date: 2015-05-11, DOI: 10.1109/TePRA.2015.7219666
Abstract: We propose a new extend function for Rapidly-exploring Random Tree (RRT) algorithms that expands along a curve obeying velocity and acceleration limits, rather than using straight-line trajectories. This yields smooth, feasible trajectories that can readily be applied in robotics applications. Our main focus is the implementation of such methods on RoboSimian, a quadruped robot competing in the DARPA Robotics Challenge (DRC). Planning in high-dimensional spaces is also a major consideration in evaluating the techniques discussed here, as motion planning for RoboSimian requires searching a 16-dimensional space. Our experiments show that the approach produces results comparable to standard RRT solutions in a two-dimensional space and significantly outperforms them in higher-dimensional settings, both in computation time and in algorithm reliability.
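The proposed extend step, moving along a velocity- and acceleration-limited curve instead of a straight line, can be sketched as a single bounded integration step. This is an illustrative reduction; the authors' actual extend function and its parameters are not given in the abstract:

```python
import numpy as np

def smooth_extend(q, v, q_target, v_max=1.0, a_max=2.0, dt=0.1):
    """One curve-following extension step: accelerate toward the target
    with bounded acceleration, clamp speed to v_max, and integrate the
    state. Returns the new configuration and velocity."""
    direction = q_target - q
    dist = np.linalg.norm(direction)
    if dist < 1e-9:
        return q.copy(), np.zeros_like(v)
    a = a_max * direction / dist       # bounded acceleration toward target
    v_new = v + a * dt
    speed = np.linalg.norm(v_new)
    if speed > v_max:
        v_new *= v_max / speed         # respect the velocity limit
    return q + v_new * dt, v_new
```

Because the velocity carries over between calls, repeated extensions trace a curve rather than a polyline, which is what makes the resulting tree edges dynamically feasible.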
JPL BioSleeve for gesture-based control: Technology development and field trials
C. Assad, Michael T. Wolf, Jaakko T. Karras, Jason I. Reid, A. Stoica
Pub Date: 2015-05-11, DOI: 10.1109/TePRA.2015.7219668
Abstract: The JPL BioSleeve is a wearable gesture-based human interface for natural robot control. Activity of the user's hand and arm is monitored via surface electromyography sensors and an inertial measurement unit embedded in a forearm sleeve. Gesture recognition software decodes the sensor signals, classifies the gesture type, and maps the result to output commands sent to a robot. The BioSleeve interface can accurately and reliably decode as many as sixteen discrete hand and finger gestures and estimate the continuous orientation of the forearm. Here we report the development of a new wireless BioSleeve prototype that enables portable field use. Gesture-based commands were developed to control a QinetiQ Dragon Runner tracked robot, including a 4-degree-of-freedom manipulator and a stereo camera pair. Gestures can be sent in several modes: supervisory point-to-goal driving commands, a virtual joystick for teleoperating the drive and manipulator, and pan-tilt camera control. Hand gestures and arm positions are mapped to commands recognized by the robot's onboard control software and are meant to integrate with the robot's perception of its environment and its ability to complete tasks at various levels of autonomy. The portable BioSleeve interface was demonstrated through control of the Dragon Runner during field trials at the 2014 Intuitive Robotic Operator Control Challenge. The successful completion of Challenge events demonstrated the system's versatility in providing multiple commands in different control modes to a robot operating under difficult real-world environmental conditions.
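As an illustration of the decoding pipeline, gesture classification from per-channel features can be sketched with a nearest-centroid classifier. This is a deliberately simple stand-in; the BioSleeve's actual recognition software is not described at this level of detail in the abstract:

```python
import numpy as np

def train_centroids(features, labels):
    """Mean feature vector per gesture class (e.g., mean absolute EMG
    amplitude per channel)."""
    classes = sorted(set(labels))
    return {c: np.mean([f for f, l in zip(features, labels) if l == c], axis=0)
            for c in classes}

def classify(centroids, f):
    """Assign the gesture whose class centroid is nearest in feature space."""
    return min(centroids, key=lambda c: np.linalg.norm(f - centroids[c]))
```

A real EMG pipeline would add windowing, filtering, and richer features, but the train-then-nearest-match structure is the same.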
Navigation performance evaluation for automatic guided vehicles
R. Bostelman, T. Hong, Gerry Cheok
Pub Date: 2015-05-11, DOI: 10.1109/TePRA.2015.7219684
Abstract: Automatic guided vehicles (AGVs), an industrial form of mobile robot, typically navigate using a central computer that commands AGV movement along predefined paths. How well they follow these paths is not well documented in research articles, and their performance is reported in non-standard manufacturer specifications. Furthermore, AGV technology is advancing toward vision guidance, mapping and localizing from onboard the vehicle, while performance evaluation of such advanced navigation techniques is only beginning. This paper describes AGV experiments that use ground-truth measurement comparison to evaluate AGV navigation performance. A generic test procedure and metrics, described herein, will be recommended to ASTM F45, a recently formed committee on AGV performance, as a navigation test method for use by the AGV and mobile robot industries.
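One concrete metric such a test method might compute, offered here as an assumption rather than the committee's chosen metric, is the deviation of ground-truth-measured vehicle positions from the predefined path:

```python
import numpy as np

def point_segment_distance(p, a, b):
    """Shortest distance from point p to the line segment ab."""
    ab, ap = b - a, p - a
    t = np.clip(np.dot(ap, ab) / np.dot(ab, ab), 0.0, 1.0)
    return np.linalg.norm(p - (a + t * ab))

def path_deviation(measured, path):
    """Max and mean deviation of measured positions from a polyline path,
    each point scored against its nearest path segment."""
    devs = [min(point_segment_distance(p, path[i], path[i + 1])
                for i in range(len(path) - 1)) for p in measured]
    return max(devs), float(np.mean(devs))
```

Comparing such deviations against a ground-truth measurement system (rather than the AGV's own odometry) is what makes the evaluation independent of the vehicle's navigation stack.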
A no reference image quality measure using a distance doubling variance
Long Bao, K. Panetta, S. Agaian
Pub Date: 2015-05-11, DOI: 10.1109/TePRA.2015.7219659
Abstract: Image quality assessment is essential for autonomous systems, where processing is applied to an acquired image that is then used for detection and recognition of objects. Low-quality images captured in the presence of noise can dramatically impair the overall performance of a recognition system built on them. In this paper, we present a new distance doubling variance color image quality measure that requires no reference image to evaluate the quality of an image. The measure differs from existing color image quality methods, which typically extend traditional grayscale approaches to color images. Here, we exploit properties of the color space, evaluating the difference between two color pixels as a distance computed with different weights for each color component. Based on this distance, we calculate the double variance of the distance matrix, which holds, for each pixel, the maximum distance to its neighboring pixels. To demonstrate its performance, we use the TID2013 database, which includes 24 different types of distortions across many kinds of images. Comparisons with state-of-the-art methods show that the new measure agrees well with the human visual system across many distortion types.
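The measure's construction admits a compact sketch. The channel weights, the 4-connected neighborhood, and the reading of "double variance" as the variance of per-block variances are all assumptions made for illustration, not the authors' definitions:

```python
import numpy as np

def weighted_color_dist(p, q, w=(0.3, 0.59, 0.11)):
    """Weighted Euclidean distance between two RGB pixels (the
    luminance-like weights here are illustrative, not the paper's)."""
    return np.sqrt(sum(wk * (pk - qk) ** 2 for wk, pk, qk in zip(w, p, q)))

def max_neighbor_distance(img):
    """For each pixel, the largest weighted color distance to its
    4-connected neighbors."""
    h, w_, _ = img.shape
    D = np.zeros((h, w_))
    for i in range(h):
        for j in range(w_):
            for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                ni, nj = i + di, j + dj
                if 0 <= ni < h and 0 <= nj < w_:
                    D[i, j] = max(D[i, j],
                                  weighted_color_dist(img[i, j], img[ni, nj]))
    return D

def double_variance(D, block=2):
    """One plausible reading of 'double variance': the variance of
    per-block variances of the distance matrix."""
    h, w_ = D.shape
    local = [D[i:i + block, j:j + block].var()
             for i in range(0, h - block + 1, block)
             for j in range(0, w_ - block + 1, block)]
    return float(np.var(local))
```

A perfectly uniform image produces a zero distance matrix and thus a zero score; noise and distortion raise the distances and their spread, which is the signal a no-reference measure can exploit.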
Towards safe robot-human collaboration systems using human pose detection
Christopher M. Reardon, Huan Tan, Balajee Kannan, Lynn A. DeRose
Pub Date: 2015-05-11, DOI: 10.1109/TePRA.2015.7219658
Abstract: This paper proposes a human-detection-based cognitive system that allows robots to work in environments shared with humans while keeping those humans safe. An integrated system is implemented spanning perception, recognition, reasoning, decision making, and action. Instead of traditional safety cages, a vision-based detection system lets robots monitor the environment and detect humans. Reasoning and decision making then enable the robot to evaluate the current safety situation and issue corresponding safety signals. Decision making is based on maximizing the robot's productivity in the manipulation process while keeping the humans in the environment safe. The system is implemented on a Baxter humanoid robot and a PowerBot mobile robot, and both physical and simulation experiments validate our design.
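A minimal example of the safety-versus-productivity trade-off described above is distance-based speed scaling: stop when a detected human is too close, run at full speed with ample clearance, and ramp linearly in between. The zone boundaries are illustrative assumptions, not the system's actual policy:

```python
def speed_scale(d_human, d_stop=0.5, d_slow=2.0):
    """Scale factor for robot speed given the nearest detected human's
    distance (m): full stop inside d_stop, linear ramp up to full
    speed at d_slow and beyond."""
    if d_human <= d_stop:
        return 0.0
    if d_human >= d_slow:
        return 1.0
    return (d_human - d_stop) / (d_slow - d_stop)
```

Scaling speed rather than always stopping is one simple way to keep productivity high while the human is nearby but not in danger.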
An iterative clustering algorithm for classification of object motion direction using infrared sensor array
Ankita Sikdar, Yuan F. Zheng, D. Xuan
Pub Date: 2015-05-11, DOI: 10.1109/TePRA.2015.7219663
Abstract: Infrared sensors are widely used in the field of robotics, primarily because these low-cost, low-power devices have a fast response rate that benefits real-time robotic systems. However, their use in this field has been largely limited to proximity estimation and obstacle avoidance. In this paper, we extend the use of these sensors from distance measurement alone to classifying the direction of motion of an object or person moving in front of them. A platform fitted with three infrared sensors records distance measurements at 100 ms intervals. A histogram-based iterative clustering algorithm segments the data into clusters, from which extracted features are fed to a classification algorithm that determines the motion direction. Experimental results validate that these low-cost infrared sensors can successfully classify a person's motion direction in real time.
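The clustering step can be illustrated with a simple 1-D gap-based grouping of distance readings, a stand-in for the paper's histogram-based iterative algorithm; the gap threshold is an assumed parameter:

```python
def cluster_1d(readings, gap=15.0):
    """Group 1-D distance readings into clusters wherever a gap larger
    than `gap` separates consecutive sorted values (a simple stand-in
    for the paper's histogram-based clustering)."""
    xs = sorted(readings)
    clusters, current = [], [xs[0]]
    for a, b in zip(xs, xs[1:]):
        if b - a > gap:
            clusters.append(current)
            current = []
        current.append(b)
    clusters.append(current)
    return clusters
```

Each cluster then corresponds to a candidate object at a distinct range; tracking how cluster membership shifts across the three sensors over time is what reveals the motion direction.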
The major error in the camera model is the tilted axis assumption: correction of a systematic perspective bias in the camera model
Guy Martin
Pub Date: 2015-05-11, DOI: 10.1109/TePRA.2015.7219697
Abstract: The tilted-axis assumption, a remnant of analog cameras, is still in use. It seeks to compensate for the image plane being off-square with the lens axis, but in fact creates a systematic shape alteration by introducing a scale variation across the image, and adds shear through the use of a skew parameter. The result is an image-center bias that in turn offsets every single parameter estimate in the camera model. We disclose our exact solution to the internal camera model, modeling the image plane as a pure projection, present our related camera calibration method, and discuss the improvements this yields in almost every performance aspect of digital imaging.
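The shear introduced by a skew parameter is easy to demonstrate with the standard pinhole intrinsic matrix. This is the generic textbook model, used here only for illustration; the focal length, principal point, and skew values are arbitrary:

```python
import numpy as np

def project(K, X):
    """Pinhole projection of a 3-D camera-frame point X with intrinsics K."""
    x = K @ X
    return x[:2] / x[2]

f = 500.0
# Intrinsic matrices with and without a skew term (K[0, 1]).
K_no_skew = np.array([[f,  0.0, 320.0], [0.0, f, 240.0], [0.0, 0.0, 1.0]])
K_skew    = np.array([[f, 20.0, 320.0], [0.0, f, 240.0], [0.0, 0.0, 1.0]])

X = np.array([0.0, 0.2, 1.0])   # a point above the optical axis
u0 = project(K_no_skew, X)      # projects straight down the image column
u1 = project(K_skew, X)         # shifted horizontally by skew * Y/Z pixels
```

The skewed projection displaces the point horizontally in proportion to its height, a shear, which is exactly the systematic shape alteration the abstract objects to.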
Design of a distributed localization algorithm to process angle-of-arrival measurements
J. Rife
Pub Date: 2015-05-11, DOI: 10.1109/TePRA.2015.7219661
Abstract: This paper presents ANIM, a novel algorithm that uses angle-of-arrival (bearing) measurements for relative positioning of networked, collaborating robots. The algorithm targets shortcomings of existing sensors (e.g., the vulnerability of GPS to jamming) by providing a cheap, low-power alternative that can exploit existing, readily available communication equipment. The method is decentralized and iterative, with subgroups of three robots alternately estimating (i) the orientation of the plane containing the robots and (ii) the direction of each edge between robots. Simulations demonstrate that ANIM converges reliably and provides accuracy sufficient for practical applications involving coordinated flying robots.
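A basic angle-of-arrival computation, intersecting two bearing rays to fix a position in the plane, illustrates the kind of measurement ANIM builds on. This is generic AoA triangulation, not the decentralized ANIM iteration itself:

```python
import numpy as np

def aoa_fix(p1, b1, p2, b2):
    """Intersect two bearing rays (angles b1, b2 measured from the +x
    axis at known positions p1, p2) to locate a target in the plane."""
    d1 = np.array([np.cos(b1), np.sin(b1)])
    d2 = np.array([np.cos(b2), np.sin(b2)])
    # Solve p1 + t1*d1 = p2 + t2*d2 for the ray parameters t1, t2.
    A = np.column_stack((d1, -d2))
    t = np.linalg.solve(A, p2 - p1)
    return p1 + t[0] * d1
```

ANIM's contribution is doing this kind of geometry without a shared absolute frame, iteratively and across three-robot subgroups; the ray intersection above is the underlying primitive.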
Distributed object manipulation using a mobile multi-agent system
Dylan Fyler, Benjamin Sullivan, I. Raptis
Pub Date: 2015-05-11, DOI: 10.1109/TePRA.2015.7219687
Abstract: This paper presents a collaborative formation-holding approach in which a swarm of robots translates and rotates an oversized object to a predefined reference location. Geometric equations and guidance feedback laws were derived so that the swarm members autonomously transport the object to its final location and orientation. Previous research in cooperative robotics has been limited by the cost and space requirements of the swarm's individual agents. Our Arachne System provides a reconfigurable platform that is both low-cost and small-scale: centimeter-scale mobile robots that communicate wirelessly and interact with their environment, with the entire configuration consisting only of a webcam, a laptop, and the robots themselves. The Arachne System was employed in the solution and implementation of the group box-pushing challenge. Experimental results illustrate the capabilities of the system and the applicability of our approach.
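A guidance feedback law of the kind described, driving an object pose toward a reference, can be sketched as a proportional controller. The gains, time step, and state convention are assumptions for illustration; the paper's derived laws are not reproduced here:

```python
import numpy as np

def pose_step(pose, target, k_lin=0.5, k_ang=0.5, dt=1.0):
    """One proportional guidance step driving an object pose (x, y, theta)
    toward a target pose. Angular error is wrapped to [-pi, pi] so the
    object always rotates the short way around."""
    x, y, th = pose
    tx, ty, tth = target
    err_ang = np.arctan2(np.sin(tth - th), np.cos(tth - th))
    return (x + k_lin * (tx - x) * dt,
            y + k_lin * (ty - y) * dt,
            th + k_ang * err_ang * dt)
```

In a multi-robot setting the resulting pose increment would then be decomposed into per-robot pushing commands, which is where the paper's geometric equations come in.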