Robust Photogrammetry-Based Online Pose Correction of Industrial Robots Employing Adaptive Integral Terminal Fractional-Order Super-Twisting Algorithm
Pub Date: 2022-12-01 | DOI: 10.1109/IRC55401.2022.00029
E. Zakeri, W. Xie
In this paper, a novel adaptive robust control scheme is proposed for pose correction of eye-to-hand photogrammetry-based industrial robots subject to uncertainties. The proposed method uses two control loops: an internal loop and an external loop. The former is a dynamic controller that regulates the robot’s joints; the latter is a kinematic controller that corrects the pose error using the end-effector pose estimated by the photogrammetry sensor (in this research, an AMETEK C-track). An adaptive integral terminal fractional-order super-twisting algorithm (AITFOSTA) is developed and employed in both control loops. AITFOSTA is an integral sliding mode controller (ISMC) whose nominal control law is a terminal one and whose switching part is replaced with a fractional-order super-twisting algorithm (FOSTA), greatly reducing chattering while rejecting the uncertainties. Additionally, an adaptive uncertainty and disturbance estimator based on a radial basis function neural network (RBFNN) is designed and used as a compensator to reduce the uncertainty bounds, contributing to further chattering reduction. A stability analysis of the proposed controller is also presented. Experimental results on a PUMA200 industrial robot show the superiority of the proposed method over other well-known approaches, reaching an unprecedented tracking accuracy of 0.06 mm and 0.18 deg for position and orientation, respectively.
{"title":"Robust Photogrammetry-Based Online Pose Correction of Industrial Robots Employing Adaptive Integral Terminal Fractional-Order Super-Twisting Algorithm","authors":"E. Zakeri, W. Xie","doi":"10.1109/IRC55401.2022.00029","DOIUrl":"https://doi.org/10.1109/IRC55401.2022.00029","url":null,"abstract":"In this paper, a novel adaptive robust control scheme is proposed for pose correction of eye-to-hand photogrammetry-based industrial robots subject to uncertainties. The proposed method uses two control loops: internal and external loops. The former is the dynamic controller designed for controlling the robot’s joints. The external loop is the kinematic controller to correct the pose error using the estimated end-effector’s pose acquired by the photogrammetry sensor (in this research C-track AMETEK). An adaptive integral terminal fractional-order super-twisting algorithm (AITFOSTA) is developed and employed for both control loops. AITFOSTA is an integral sliding mode controller (ISMC) whose nominal control law is a terminal one and its switching part is replaced with a fractional-order super-twisting algorithm (FOSTA), reducing the chattering to a great extent while rejecting the uncertainties. Additionally, an adaptive uncertainty and disturbance estimator based on radial basis function neural network (RBFNN) is designed and used as a compensator to reduce the uncertainty bounds, contributing to further chattering reduction. The stability analysis of the proposed controller is also presented. Experimental results on a PUMA200 industrial robot show superiority of the proposed method over other well-known approaches by reaching an unprecedented tracking accuracy, i.e., 0.06 mm and 0.18 deg for position and orientation, respectively.","PeriodicalId":282759,"journal":{"name":"2022 Sixth IEEE International Conference on Robotic Computing (IRC)","volume":"19 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125202450","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
On Path Regression with Extreme Learning and the Linear Configuration Space
Pub Date: 2022-12-01 | DOI: 10.1109/IRC55401.2022.00074
V. Parque, T. Miyashita
This paper studies the path regression problem, that is, learning motion-planning functions that render trajectories from initial to end robot configurations in a single forward pass. To this end, we study path regression using the linear transition in the configuration space and shallow neural schemes based on Extreme Learning Machines. Our computational experiments on a relevant and diverse set of 6-DOF robot trajectories show the feasibility and practical efficiency of path regression, with attractive generalization performance on out-of-sample observations. In particular, we show that it is possible to learn neural policies for path regression in about 10 ms to 31 ms while achieving mean squared errors of 10⁻³ to 10⁻⁶ on unseen out-of-sample scenarios. We believe our approach has the potential to enable efficient algorithms for learning-based motion planning.
{"title":"On Path Regression with Extreme Learning and the Linear Configuration Space","authors":"V. Parque, T. Miyashita","doi":"10.1109/IRC55401.2022.00074","DOIUrl":"https://doi.org/10.1109/IRC55401.2022.00074","url":null,"abstract":"This paper studies the path regression problem, that is learning motion planning functions that render trajectories from initial to end robot configurations in a single forward pass. To this end, we have studied the path regression problem using the linear transition in the configuration space and shallow neural schemes based on Extreme Learning Machines. Our computational experiments involving a relevant and diverse set of 6-DOF robot trajectories have shown path regression’s feasibility and practical efficiency with attractive generalization performance in out-of-sample observations. In particular, we show that it is possible to learn neural policies for path regression in about 10 ms. - 31 ms. and achieving 10−3 – 10−6 Mean Squared Error on unseen out-of-sample scenarios. We believe our approach has the potential to explore efficient algorithms for learning-based motion planning.","PeriodicalId":282759,"journal":{"name":"2022 Sixth IEEE International Conference on Robotic Computing (IRC)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130889765","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Indoors Traversability Estimation with Less Labels for Mobile Robots
Pub Date: 2022-12-01 | DOI: 10.1109/IRC55401.2022.00059
C. Sevastopoulos, Michail Theofanidis, Mohammad Zaki Zadeh, Sneh Acharya, S. Konstantopoulos, V. Karkaletsis, F. Makedon
We present a method for binary (go/no-go) indoor traversability estimation from 2D images. Our method exploits the power of a pre-trained Vision Transformer (ViT), which we fine-tune on our own dataset, gathered through experiments with a mobile robotic platform. This fine-tuning approach is a step towards a semi-supervised deep learning technique for indoor traversability estimation in scenarios where only a small amount of data is available. We evaluate the accuracy and generalization power of our method against well-established state-of-the-art deep architectures for image classification, such as ResNet, and show improved performance.
{"title":"Indoors Traversability Estimation with Less Labels for Mobile Robots","authors":"C. Sevastopoulos, Michail Theofanidis, Mohammad Zaki Zadeh, Sneh Acharya, S. Konstantopoulos, V. Karkaletsis, F. Makedon","doi":"10.1109/IRC55401.2022.00059","DOIUrl":"https://doi.org/10.1109/IRC55401.2022.00059","url":null,"abstract":"We present a method for binary (go/no-go) indoors traversability estimation from 2D images. Our method exploits the power of a pre-trained Vision Transformer (ViT) which we fine-tune on our own dataset. We conduct experiments using a mobile robotic platform to gather image data. Our fine-tuning approach includes the use of a pre-trained Vision Transformer (ViT) en route towards developing a semi-supervised deep learning technique to enhance indoor traversability estimation for scenarios where only a small amount of data is available. We evaluate the accuracy and generalization power of our method against well-established state-of-the-art deep architectures for image classification such as ResNet, and show improved performance.","PeriodicalId":282759,"journal":{"name":"2022 Sixth IEEE International Conference on Robotic Computing (IRC)","volume":"209 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131565797","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Generating Robot-Dependent Cost Maps for Off-Road Environments Using Locomotion Experiments and Earth Observation Data*
Matthias Eder, Raphael Prinz, Florian Schöggl, Gerald Steinbauer-Wagner
Pub Date: 2022-12-01 | DOI: 10.1109/IRC55401.2022.00036
In recent years, the navigation capabilities of mobile robots in off-road environments have increased significantly, opening up new potential applications in a variety of settings. By accurately identifying different classes of terrain in unstructured environments, safe automated navigation can be supported. However, to enable safe path planning and execution, the traversability costs of the terrain classes need to be estimated. Such estimation is often performed manually by experts who possess information about the environment and are familiar with the capabilities of the robotic system. In this paper, we present an automated pipeline for generating traversability costs that uses recorded locomotion data and descriptive information on the terrain obtained from earth observation data. The main contribution is that the cost estimation for different terrain classes is based on locomotion data obtained in simple standardized experiments. Moreover, by repeating the experiments with different robot systems, we can easily identify the actual capabilities of those systems. Experiments were conducted in an alpine off-road environment to record locomotion data of four different robot systems and to investigate the performance and validity of the proposed pipeline. The recorded locomotion data for the different robots are publicly available at https://robonav.ist.tugraz.at/data/
{"title":"Generating Robot-Dependent Cost Maps for Off-Road Environments Using Locomotion Experiments and Earth Observation Data*","authors":"Matthias Eder, Raphael Prinz, Florian Schöggl, Gerald Steinbauer-Wagner","doi":"10.1109/IRC55401.2022.00036","DOIUrl":"https://doi.org/10.1109/IRC55401.2022.00036","url":null,"abstract":"In recent years, the navigation capabilities of mobile robots in off-road environments have increased significantly, opening up new potential applications in a variety of settings. By accurately identifying different classes of terrain in unstructured environments, safe automated navigation can be supported. However, to enable safe path planning and execution, the traversability costs of the terrain classes need to be estimated. Such estimation is often performed manually by experts who possess information about the environment and are familiar with the capabilities of the robotic system. In this paper, we present an automated pipeline for generating traversability costs that use recorded locomotion data and descriptive information on the terrain obtained from earth observation data. The main contribution is that the cost estimation for different terrain classes is based on locomotion data obtained in simple standardized experiments. Moreover, by repeating the experiments with different robot systems we are easily able to identify the actual capabilities of that systems. Experiments were conducted in an alpine off-road environment to record locomotion data of four different robot systems and to investigate the performance and validity of the proposed pipeline. The recorded locomotion data for the different robots are publicly available at https://robonav.ist.tugraz.at/data/","PeriodicalId":282759,"journal":{"name":"2022 Sixth IEEE International Conference on Robotic Computing (IRC)","volume":"24 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115225133","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Single Frame Lidar-Camera Calibration Using Registration of 3D Planes
Pub Date: 2022-12-01 | DOI: 10.1109/IRC55401.2022.00076
Ashutosh Singandhupe, Hung M. La, Q. Ha
This work focuses on finding the extrinsic parameters (rotation and translation) between a Lidar and an RGB camera. We place a planar checkerboard inside the field of view (FOV) of both sensors and extract the 3D plane information of the checkerboard from each sensor’s data. The extracted plane coefficients are used to construct a well-structured set of 3D points, which are then aligned to obtain the relative transformation between the two sensors. We use our proposed Correntropy Similarity Matrix Iterative Closest Point (CoSMICP) algorithm to estimate this transformation. The method requires only a single frame of point cloud data from the Lidar sensor and a single frame from the calibrated camera. From the camera image, we use the projection of the calibration target’s corner points to compute the 3D points and, along the way, the 3D plane equation. We evaluate our approach on a simulated dataset with complex environment settings, making use of the freedom to assess multiple configurations, and the obtained results verify our method under various configurations.
{"title":"Single Frame Lidar-Camera Calibration Using Registration of 3D Planes","authors":"Ashutosh Singandhupe, Hung M. La, Q. Ha","doi":"10.1109/IRC55401.2022.00076","DOIUrl":"https://doi.org/10.1109/IRC55401.2022.00076","url":null,"abstract":"This work focuses on finding the extrinsic parameters (rotation and translation) between Lidar and an RGB camera sensor. We use a planar checkerboard and place it inside the Field-of-View (FOV) of both sensors, where we extract the 3D plane information of the checkerboard acquired from the sensor’s data. The plane coefficients extracted from the sensor’s data are used to construct a well-structured set of 3D points. These 3D points are then ’aligned,’ which gives the relative transformation between the two sensors. We use our proposed Correntropy Similarity Matrix Iterative Closest Point (CoSMICP) Algorithm to estimate the relative transformation. This work uses a single frame of the point cloud data acquired from the Lidar sensor and a single frame from the calibrated camera data to perform this operation. From the camera image, we use the projection of the calibration target’s corner points to compute the 3D points, and along the process, we calculate the 3D plane equation using the corner points. We evaluate our approach on a simulated dataset with complex environment settings, making use of the freedom to assess under multiple configurations. Through the obtained results, we verify our method under various configurations.","PeriodicalId":282759,"journal":{"name":"2022 Sixth IEEE International Conference on Robotic Computing (IRC)","volume":"15 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115574578","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
6D pose estimation and 3D object reconstruction from 2D shape for robotic grasping of objects
Pub Date: 2022-12-01 | DOI: 10.1109/IRC55401.2022.00018
Marcell Wolnitza, Osman Kaya, T. Kulvicius, F. Wörgötter, B. Dellen
We propose a method for 3D object reconstruction and 6D pose estimation from 2D images that uses knowledge about object shape as the primary key. In the proposed pipeline, recognition and labeling of objects in 2D images deliver 2D segment silhouettes, which are compared with the silhouettes of projections obtained from various views of a 3D model representing the recognized object class. Transformation parameters are computed directly from the 2D images, making the approach feasible. Furthermore, 3D transformations and projective geometry are employed to arrive at a full 3D reconstruction of the object in camera space using a calibrated setup. The method is quantitatively evaluated using synthetic data and tested with real data. In robot experiments, successful grasping of objects demonstrates its usability in real-world environments. The method is applicable wherever 3D object models, e.g., CAD models or point clouds, are available and precise pixel-wise segmentation maps of 2D images can be obtained. Unlike other methods, it does not use 3D depth for training, widening its domain of application.
{"title":"6D pose estimation and 3D object reconstruction from 2D shape for robotic grasping of objects","authors":"Marcell Wolnitza, Osman Kaya, T. Kulvicius, F. Wörgötter, B. Dellen","doi":"10.1109/IRC55401.2022.00018","DOIUrl":"https://doi.org/10.1109/IRC55401.2022.00018","url":null,"abstract":"We propose a method for 3D object reconstruction and 6D pose estimation from 2D images that uses knowledge about object shape as the primary key. In the proposed pipeline, recognition and labeling of objects in 2D images deliver 2D segment silhouettes that are compared with the 2D silhouettes of projections obtained from various views of a 3D model representing the recognized object class. Transformation parameters are computed directly from the 2D images, making the approach feasible. Furthermore, 3D transformations and projective geometry are employed to arrive at a full 3D reconstruction of the object in camera space using a calibrated setup. The method is quantitatively evaluated using synthetic data and tested with real data. In robot experiments, successful grasping of objects demonstrates its usability in real-world environments. The method is applicable to scenarios where 3D object models, e.g., CAD-models or point clouds, are available and precise pixel-wise segmentation maps of 2D images can be obtained. Different from other methods, the method does not use 3D depth for training, widening the domain of application.","PeriodicalId":282759,"journal":{"name":"2022 Sixth IEEE International Conference on Robotic Computing (IRC)","volume":"10 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116034991","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
On Embedding a Dataflow Architecture in a Multi-Robot System
Pub Date: 2022-12-01 | DOI: 10.1109/IRC55401.2022.00052
Menaxi J. Bagchi, Divya D. Kulkarni, S. B. Nair, P. K. Das
In real-world robotic systems, new jobs may need to be executed even while the robots are executing those assigned earlier. These new jobs can crop up asynchronously and on the fly. In this paper, we propose a mechanism that couples the concept of dataflow computing with mobile agent technology to ensure that new jobs can be added online with minimal interference to the currently executing ones. The entire system need not be brought to a standstill for the addition and execution of new jobs; the proposed mechanism facilitates their execution alongside those already running. The mechanism also helps the overall system, comprising robots, embedded systems, sensors, personal computers, and mobile and static agents, execute in a decentralized and distributed manner. Experiments using such heterogeneous entities are conducted in both emulated and real worlds to demonstrate the proposed mechanism.
{"title":"On Embedding a Dataflow Architecture in a Multi-Robot System","authors":"Menaxi J. Bagchi, Divya D. Kulkarni, S. B. Nair, P. K. Das","doi":"10.1109/IRC55401.2022.00052","DOIUrl":"https://doi.org/10.1109/IRC55401.2022.00052","url":null,"abstract":"In real-world robotic systems, new jobs may need to be executed even when the robots are executing the ones assigned earlier. These new jobs can crop up asynchronously and on-the-fly. In this paper, we propose a mechanism that uses the concept of dataflow computing coupled with mobile agent technology, to ensure that new jobs can be added online with minimal interference to the currently executing jobs. The entire system need not be brought to a standstill because of the addition and execution of the new jobs. The proposed mechanism facilitates simultaneous executions of the new jobs along with the ones currently executing. The mechanism also helps make the overall system constituting robots, embedded systems, sensors, personal computers, and mobile and static agents, execute in a decentralized and distributed manner. Experiments are conducted in both emulated and real worlds, using such heterogeneous entities to portray the proposed mechanism.","PeriodicalId":282759,"journal":{"name":"2022 Sixth IEEE International Conference on Robotic Computing (IRC)","volume":"27 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123510267","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Scenario and system concept for a firefighting UAV-UGV team
Pub Date: 2022-12-01 | DOI: 10.1109/IRC55401.2022.00049
Merlin Stampa, Uwe Jahn, D. Fruhner, Tim Streckert, Christof Röhrig
This work presents a scenario and a system concept for an Unmanned Aerial Vehicle (UAV) teamed with an Unmanned Ground Vehicle (UGV) and a base station for firefighting tasks. Based on a detailed scenario description, we investigate tangible design choices for the relevant hardware and algorithms using today’s technology. We conclude that implementing a functional prototype appears feasible.
{"title":"Scenario and system concept for a firefighting UAV-UGV team","authors":"Merlin Stampa, Uwe Jahn, D. Fruhner, Tim Streckert, Christof Röhrig","doi":"10.1109/IRC55401.2022.00049","DOIUrl":"https://doi.org/10.1109/IRC55401.2022.00049","url":null,"abstract":"This work presents a scenario and a system concept for an Unmanned Aerial Vehicle (UAV) teamed with an Unmanned Ground Vehicle (UGV) and a base station for firefighting tasks. Based on a detailed scenario description, we investigate tangible design choices regarding relevant hardware and algorithms based on today’s technology. We conclude that the implementation of a functional prototype appears feasible.","PeriodicalId":282759,"journal":{"name":"2022 Sixth IEEE International Conference on Robotic Computing (IRC)","volume":"21 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121552854","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Analytical Solutions for Two-Contact Whole-Arm Manipulation Inverse Kinematics for Manipulators with Link Offsets
Pub Date: 2022-12-01 | DOI: 10.1109/IRC55401.2022.00030
Pascal Hinrichs, Minh Tam Vu, M. Pfingsthorn, C. Kowalski, A. Hein
Whole-arm manipulation (WAM) is an ideal way to manipulate large and heavy loads. Algorithms for generating whole-arm configurations are always constructed for a specific manipulator type; with increased transferability between types, advances in whole-arm manipulation could be applied more easily and extended to other manipulators. In this work, we therefore present two algorithms for transferring a whole-arm configuration from a manipulator without link offsets to one with link offsets at the elbow and wrist. Both are analytical: one can be solved algebraically, the other only numerically. We show the advantages and disadvantages of these two variants and compare them with a memetic evolution algorithm as a baseline. Our algorithms require only about one thousandth of the computational time and achieve a significantly smaller variance in the joint-position solutions.
{"title":"Analytical Solutions for Two-Contact Whole-Arm Manipulation Inverse Kinematics for Manipulators with Link Offsets","authors":"Pascal Hinrichs, Minh Tam Vu, M. Pfingsthorn, C. Kowalski, A. Hein","doi":"10.1109/IRC55401.2022.00030","DOIUrl":"https://doi.org/10.1109/IRC55401.2022.00030","url":null,"abstract":"Whole-arm manipulation (WAM) is an ideal way to manipulate large and heavy loads. Whole-arm configuration generation algorithms are always constructed for a specific manipulator type. However, through increased transferability between types, advances in whole-arm manipulation could be applied more easily and developed to other manipulators. We therefore present two algorithms for transferring a whole-arm configuration from a manipulator without link offsets to one with link offsets in the elbow and wrist in this work. Both are analytical, where one can be solved algebraically, the other one only numerically. We show the advantages and disadvantages of these two variants and compare them with a memetic evolution algorithm as a baseline. It is shown that our algorithms require only about one thousandth of the computational time and achieve a significantly smaller variance in the joint position solutions.","PeriodicalId":282759,"journal":{"name":"2022 Sixth IEEE International Conference on Robotic Computing (IRC)","volume":"109 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125619648","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Distributed Computation and Dynamic Load balancing in Modular Edge Robotics
Pub Date: 2022-12-01 | DOI: 10.1109/IRC55401.2022.00016
Swarnabha Roy, Dharmendra Baruah, Steven Hernandez, Stavros Kalafatis
As the complexity of tasks that robots can handle increases, coordination among multiple robots to accomplish a task has grown significantly in importance. Existing research has primarily focused on developing modular network topologies or communication protocols concerned solely with data communication, largely ignoring load-balancing optimizations. We discuss existing modular robotic architectures, compare their data and load sharing, and propose a new load-balancing protocol. Our load-balancing system is based on modular robotic clusters that work together by sharing resources through Kubernetes. This system improves reliability through information sharing and shared hardware resources among the robots, as well as scalability, allowing the architecture to expand based on need.
{"title":"Distributed Computation and Dynamic Load balancing in Modular Edge Robotics","authors":"Swarnabha Roy, Dharmendra Baruah, Steven Hernandez, Stavros Kalafatis","doi":"10.1109/IRC55401.2022.00016","DOIUrl":"https://doi.org/10.1109/IRC55401.2022.00016","url":null,"abstract":"As the task complexity that robots can handle increases, the importance of coordination among multiple robots to accomplish a task has grown significantly. Existing research has primarily focused on developing a modular network topology, or a communication protocol solely focused on data communication but largely ignoring load-balancing optimizations. We discuss the existing modular robotic architectures, compare them in data and load sharing and propose a new load balancing protocol. Our load balancing system is based on having modular robotic clusters that work together by sharing resources through Kubernetes. This system improves reliability through information sharing, shared hardware resources among the robots, and scalability, allowing the architecture to expand based on need.","PeriodicalId":282759,"journal":{"name":"2022 Sixth IEEE International Conference on Robotic Computing (IRC)","volume":"225 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133754961","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}