Pub Date: 2022-12-01 | DOI: 10.1109/IRC55401.2022.00051
Moritz P. Heimbach, J. Weber, M. Schmidt
In recent years, artificial intelligence and machine learning have achieved a continuous stream of new successes. However, this progress largely relies on simulations and numerous powerful computers. Due to the volume these components occupy and the power required to run them, this is not feasible for mobile robotics. Nevertheless, the use of machine learning in mobile robots is desirable in order to adapt to unknown or changing environmental conditions. This paper evaluates the performance of different reinforcement learning methods on a physical robot platform. The robot has an arm with two degrees of freedom that it uses to move across a surface. The goal is to learn the correct motion sequence of the arm to move the robot. The focus is exclusively on using the robot’s onboard computer, a Raspberry Pi 4 Model B. To learn forward motion, Value Iteration and variants of Q-learning from the field of reinforcement learning are used. It is shown that, since the structure of some problems can be described by a very limited problem space, relatively simple algorithms can yield sufficient learning results even on a physical robot. Furthermore, hardware limitations may prevent the use of more complex algorithms.
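As a concrete illustration of the tabular methods the abstract names, here is a minimal Q-learning sketch; the two-joint discretization, action set, and learning parameters are illustrative assumptions, not taken from the paper:

```python
import itertools
import random

# Hypothetical discretization: each of the two arm joints takes 4 positions.
STATES = list(itertools.product(range(4), range(4)))
ACTIONS = ["j1+", "j1-", "j2+", "j2-"]  # step one joint up or down

def epsilon_greedy(q, state, eps=0.2):
    """Pick a random action with probability eps, else the greedy one."""
    if random.random() < eps:
        return random.choice(ACTIONS)
    return max(ACTIONS, key=lambda a: q[(state, a)])

def q_update(q, s, a, reward, s_next, alpha=0.1, gamma=0.9):
    """Tabular Q-learning: Q(s,a) += alpha * (r + gamma * max_a' Q(s',a') - Q(s,a))."""
    best_next = max(q[(s_next, a2)] for a2 in ACTIONS)
    q[(s, a)] += alpha * (reward + gamma * best_next - q[(s, a)])

q = {(s, a): 0.0 for s in STATES for a in ACTIONS}
# Reward would come from measured forward displacement of the robot body.
q_update(q, (0, 0), "j1+", reward=1.0, s_next=(1, 0))
```

With a 4x4 state grid and 4 actions, the whole table has 64 entries, which is why such a method fits comfortably on a Raspberry Pi.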
Title: Training a robot with limited computing resources to crawl using reinforcement learning
Published in: 2022 Sixth IEEE International Conference on Robotic Computing (IRC)
Pub Date: 2022-12-01 | DOI: 10.1109/IRC55401.2022.00024
Juann Kim, Dong-Whan Lee, Youngseop Kim, Heeyeon Shin, Yeeun Heo, Yaqin Wang, E. Matson
Drones have been studied in a variety of industries, and drone detection is one of the most important tasks. The goal of this paper is to detect a target drone using the microphone and camera of a detecting drone by training deep learning models. For evaluation, three methods are used: visual-based, audio-based, and the decision fusion of both features. Image and audio data were collected from the detecting drone by flying two drones at a fixed distance of 20 m. A CNN (Convolutional Neural Network) was used for audio, and YOLOv5 was used for computer vision. The decision fusion of audio- and vision-based features showed the highest accuracy among the three evaluation methods.
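The decision-fusion step can be sketched as follows; the paper does not specify its exact fusion rule, so this weighted-average late fusion and the 0.5 threshold are assumptions:

```python
def fuse_decisions(p_audio, p_vision, w_audio=0.5):
    """Decision-level (late) fusion: weighted average of the per-modality
    drone-presence probabilities from the CNN and the YOLOv5 detector."""
    return w_audio * p_audio + (1.0 - w_audio) * p_vision

def detect(p_audio, p_vision, threshold=0.5):
    """Declare a drone when the fused confidence clears the threshold."""
    return fuse_decisions(p_audio, p_vision) >= threshold
```

The benefit of late fusion is that a strong cue in one modality (e.g. a clear acoustic signature when the drone is visually occluded) can carry the decision.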
Title: Deep Learning Based Malicious Drone Detection Using Acoustic and Image Data
Pub Date: 2022-12-01 | DOI: 10.1109/IRC55401.2022.00043
Hearim Moon, Eunsik Park, Junghyun Moon, Juyeong Lee, Minji Lee, Doyoon Kim, Minsun Lee, E. Matson
Tropical cyclones are among the world’s deadliest natural disasters; in particular, they kill trees by uprooting or breaking them, which greatly affects forest ecosystems and forest owners. To minimize further damage, an efficient approach is required to identify the location and distribution of fallen trees. Several past studies have attempted to detect fallen trees, but most are expensive and difficult to apply. This study therefore aims to solve these problems: a cost-effective unmanned aerial vehicle (UAV) equipped with a high-resolution secondary camera collects data, which is then used to train a YOLOX model, an object detection algorithm that can perform accurate detection in a very short time. The solution presented in this study can be used in any scenario that requires low-cost, high-reliability object detection. Experimental results show that our solution detected 88% of the fallen trees in the images using YOLOX. We also implemented a visualization application that displays the detection results computed by the trained model in a client-friendly way. Our solution recognizes fallen trees in images or videos and presents the analysis results as a web-based visualization.
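The reported 88% is a detection (recall-style) rate; a minimal sketch of how such a rate can be computed from predicted and ground-truth boxes, assuming a standard IoU matching threshold of 0.5 (the threshold is our assumption, not stated in the abstract):

```python
def iou(a, b):
    """Intersection-over-union of two axis-aligned boxes (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

def detection_rate(detections, ground_truth, thr=0.5):
    """Fraction of ground-truth trees matched by at least one detection."""
    hit = sum(1 for g in ground_truth if any(iou(d, g) >= thr for d in detections))
    return hit / len(ground_truth)
```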
Title: Cost-Effective Solution for Fallen Tree Recognition Using YOLOX Object Detection
Pub Date: 2022-12-01 | DOI: 10.1109/IRC55401.2022.00008
Marvin Brenner, P. Stütz
This work provides the foundation for a gesture-based interaction system between cargo-handling unmanned aerial vehicles (UAVs) and ground personnel. The system is designed with increasingly automated platforms in mind and enables operators to visually communicate higher-level commands through a minimal number of necessary gestures. The UAV supports the operator in monitoring the surroundings and provides visual feedback in order to increase safety while carrying cargo near ground level. The interaction concept transfers two goal-directed control techniques to the cargo-handling use case: object selection via deictic pointing and a proxy manipulation gesture are used to visually communicate intent and control the UAV’s flight. A visual processing pipeline realizing this concept is presented, along with first simulated evaluations of its subcomponents.
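Object selection via deictic pointing reduces to intersecting a pointing ray with the ground plane; a minimal geometric sketch (the flat-ground assumption and the frame conventions are ours, not the paper’s):

```python
def point_on_ground(origin, direction, ground_z=0.0):
    """Intersect a pointing ray (e.g. shoulder-to-wrist, estimated from the
    operator's pose) with the plane z = ground_z. Returns the selected
    (x, y) point, or None when the ray does not point toward the ground."""
    ox, oy, oz = origin
    dx, dy, dz = direction
    if dz >= 0.0:  # parallel to or pointing away from the ground plane
        return None
    t = (ground_z - oz) / dz
    return (ox + t * dx, oy + t * dy)
```

An operator's shoulder at 1.5 m pointing 45 degrees downward would select a spot 1.5 m ahead; the UAV can highlight that spot as visual feedback before acting.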
Title: Towards Gesture-based Cooperation with Cargo Handling Unmanned Aerial Vehicles: A Conceptual Approach
Pub Date: 2022-12-01 | DOI: 10.1109/IRC55401.2022.00035
Minh Trinh, Mohamed H. Behery, Mahmoud Emara, G. Lakemeyer, S. Storms, C. Brecher
Dynamics modeling of industrial robots using analytical models requires the complex identification of relevant parameters such as masses, centers of gravity, and inertia tensors, which is often error-prone. Deep learning approaches have recently been used as an alternative. Here, the challenge lies not only in learning the temporal dependencies between data points but also the dependencies between the attributes of each point. Long Short-Term Memory networks (LSTMs) have been applied to this problem as the standard architecture for time-series processing. However, LSTMs cannot fully exploit the parallelization capabilities that have emerged in the past decade, leading to a time-consuming training process. Transformer networks (transformers) have recently been introduced to overcome the long training times while still learning temporal dependencies in the data. They can further be combined with convolutional layers to learn the dependencies between attributes in multivariate time-series problems. In this paper, we show that these transformers can be used to accurately learn the dynamics model of a robot. We train and test two variants of transformers, with and without convolutional layers, and compare their results to other models such as vector autoregression, extreme gradient boosting, and LSTM networks. The transformers, especially with convolution, outperformed the other models in terms of performance and prediction accuracy. Finally, the best-performing network is evaluated for prediction plausibility using a method from explainable artificial intelligence, in order to increase the user’s trust.
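Sequence models like these are typically trained on sliding windows over the multivariate joint-state series; a minimal sketch of that data preparation (window and horizon sizes are illustrative, not the paper’s configuration):

```python
def make_windows(series, window, horizon=1):
    """Turn a multivariate series (one attribute vector per timestep, e.g.
    joint positions/velocities/torques) into (input window, target vector)
    pairs for a sequence model such as a transformer or an LSTM."""
    pairs = []
    for i in range(len(series) - window - horizon + 1):
        pairs.append((series[i:i + window], series[i + window + horizon - 1]))
    return pairs
```

A transformer consumes the whole input window in parallel, whereas an LSTM must step through it sequentially; this is the parallelization advantage the abstract refers to.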
Title: Dynamics Modeling of Industrial Robots Using Transformer Networks
Pub Date: 2022-12-01 | DOI: 10.1109/IRC55401.2022.00075
Ilya Gukov, Alvis Logins
In this article, a method is presented for generating a trajectory through predetermined waypoints, with jerk and duration as two conflicting objectives. The method uses a Seq2Seq neural network model to approximate Pareto-efficient solutions. It trains on a set of random trajectories optimized by Sequential Quadratic Programming (SQP) with a novel initialization strategy. We consider an example pick-and-place task for a robot manipulator. Based on several metrics, we show that our model generalizes over diverse paths and outperforms a genetic algorithm, SQP with naive initialization, and scaled time-optimal methods. At the same time, our model features a negligible GPU-accelerated inference time of 5 ms, demonstrating the applicability of the approach to real-time control.
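One of the two objectives, jerk, can be approximated for a sampled trajectory with third-order finite differences; a minimal sketch (uniform sampling assumed; this is an illustration, not the paper’s exact cost):

```python
def jerk_cost(positions, dt):
    """Integrated squared jerk of a uniformly sampled 1-D trajectory,
    approximated with third-order finite differences. Lower is smoother;
    stretching the same path over a longer duration (larger dt) lowers
    jerk, which is exactly the jerk/duration trade-off."""
    total = 0.0
    for i in range(len(positions) - 3):
        jerk = (positions[i + 3] - 3 * positions[i + 2]
                + 3 * positions[i + 1] - positions[i]) / dt ** 3
        total += jerk ** 2 * dt
    return total
```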
Title: Real-time Multi-Objective Trajectory Optimization
Pub Date: 2022-12-01 | DOI: 10.1109/IRC55401.2022.00038
João Correia, Plinio Moreno, João Avelino
Being able to anticipate actions is a critical part of many applications today. One of them is autonomous driving, undoubtedly one of the most popular subjects, where action anticipation can help define how the vehicle should act next. In this work, we present a method for action anticipation in the autonomous driving scenario, specifically for anticipating pedestrian intentions. The method extracts movement features from a video sequence, to which context information from other sensors can be added. These features are used by a sequential deep learning model, which predicts the action being performed by a pedestrian. Furthermore, we propose a skeleton completer, which can be used in many other applications. Since this is a high-risk scenario, we also explore the concept of decisions under uncertainty and propose an effective method to decide whether or not to anticipate the action. Our methods obtain state-of-the-art results in terms of anticipation accuracy on two comprehensive datasets.
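Deciding whether to anticipate can be framed as abstaining when predictive uncertainty is too high; a minimal entropy-threshold sketch (the threshold value is illustrative, and the paper’s actual criterion may differ):

```python
import math

def entropy(probs):
    """Shannon entropy (nats) of a predicted class distribution."""
    return -sum(p * math.log(p) for p in probs if p > 0.0)

def anticipate_or_abstain(probs, max_entropy=0.5):
    """Return the predicted class index only when the model is confident
    enough; otherwise abstain (None) and defer the decision, e.g. to a
    conservative fallback behavior of the vehicle."""
    if entropy(probs) > max_entropy:
        return None
    return max(range(len(probs)), key=lambda i: probs[i])
```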
Title: Pedestrian Intention Anticipation with Uncertainty Based Decision for Autonomous Driving
Pub Date: 2022-12-01 | DOI: 10.1109/IRC55401.2022.00013
Sungjin Park, Haegyeong Im, Gayoung Yeom, Dayeon Won, Minji Kim, Xavier Lopez, Anthony H. Smith
To satisfy the food demand that follows population growth, smart farms incorporating the Internet of Things (IoT) have emerged, and most smart farms gather data using Low Power Wide Area Network (LPWAN) protocols such as LoRa. In this paper, a platform is designed to check real-time information from LoRaWAN-based smart farms, with the aim of confirming the efficiency gained by applying Kubernetes to this platform. Experiments compare CPU usage, Requests Per Second (RPS), and response times before and after using Kubernetes.
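The compared metrics can be summarized with a few helper functions; a minimal sketch (the sample values below are illustrative, not the paper’s measurements):

```python
def throughput_rps(total_requests, duration_s):
    """Requests Per Second over a load-test run."""
    return total_requests / duration_s

def mean_response_ms(samples_ms):
    """Mean response time of the sampled requests, in milliseconds."""
    return sum(samples_ms) / len(samples_ms)

def relative_change_pct(before, after):
    """Percentage change of a metric after the migration to Kubernetes."""
    return (after - before) / before * 100.0
```

For example, going from 1200 requests in a 60 s run to 1500 requests in the same window is a 25% RPS improvement.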
Title: Performance Evaluation of Containerized Systems before and after using Kubernetes for Smart Farm Visualization Platform based on LoRaWAN
Pub Date: 2022-12-01 | DOI: 10.1109/IRC55401.2022.00026
Matthias Stueben, Alexander Poeppel, W. Reif
Online monitoring of external forces and torques is highly important for safety and robustness in certain manipulation tasks and close interaction with humans. For fixed-base manipulators, methods using explicit dynamic models as well as neural networks are popular. In this paper, we address the problem of estimating external torques on a mobile manipulator, where the mobile base introduces additional dynamic effects on the manipulator joints. We adapt a model-based method that is established for fixed-base manipulators to the mobile manipulator case. We identify the relevant dynamic parameters and use a momentum observer for online torque estimation. A learning-based method using long short-term memory (LSTM) neural networks is presented afterwards. The accuracy of the two methods is compared in an evaluation with a real mobile manipulator with attached weights.
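A momentum observer estimates external torque as a residual between measured and predicted generalized momentum; a simplified single-joint sketch (constant inertia, no gravity or Coriolis terms; the gain and time step are illustrative, and the paper’s multi-joint formulation is richer):

```python
def momentum_observer(tau_motor, qdot, inertia, gain, dt):
    """Simplified single-joint momentum observer: p_hat integrates the
    commanded torque plus the residual, and the residual
    r = gain * (measured momentum - p_hat) converges to the external
    torque acting on the joint."""
    p_hat = inertia * qdot[0]
    r = 0.0
    residuals = []
    for k in range(len(tau_motor)):
        p_hat += (tau_motor[k] + r) * dt
        r = gain * (inertia * qdot[k] - p_hat)
        residuals.append(r)
    return residuals

# Joint accelerated purely by a constant external torque of 1.0 Nm:
dt = 0.01
qdot = [k * dt for k in range(300)]  # qddot = tau_ext / inertia = 1.0
res = momentum_observer([0.0] * 300, qdot, inertia=1.0, gain=10.0, dt=dt)
```

The residual converges to the applied 1.0 Nm with a first-order lag set by the gain; on a mobile manipulator, base motion adds extra momentum terms that this simplified model omits.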
Title: External Torque Estimation for Mobile Manipulators: A Comparison of Model-based and LSTM Methods
Pub Date: 2022-12-01 | DOI: 10.1109/IRC55401.2022.00011
Alessio Saccuti, Riccardo Monica, J. Aleotti
Mobile manipulators are attractive in industrial warehouses because they can interact safely with humans, navigate operational spaces, and be reconfigured for different tasks, thus overcoming the limitations of fixed robot cells. This paper presents a comparative analysis of collaborative robots mounted on a mobile base for depalletization tasks in the tight spaces imposed by storage racks. An experimental evaluation is reported in a simulated environment using a geometric motion planner with different robot configurations. The results indicate that choosing the most appropriate robot for a depalletization task is not trivial, as it depends on many factors. Simulation with motion planning is therefore an effective strategy to evaluate the performance of different robots.
Title: A Comparative Analysis of Collaborative Robots for Autonomous Mobile Depalletizing Tasks