Comparative Modeling Study of Pneumatic Artificial Muscle Using Neural Networks, ANFIS and Curve Fitting
Pub Date: 2023-02-10 | DOI: 10.1109/ICARA56516.2023.10125812
M. A. Mallouh, W. Araydah, Basel Jouda, M. Al-Khawaldeh
Pneumatic Artificial Muscles (PAMs) are widely used in the fields of biorobotics and medicine due to their flexibility, safe operation, lack of mechanical wear, low manufacturing cost, and high power-to-weight ratio. Obtaining an accurate PAM model is crucial for building a controller that achieves the required performance specifications. This study aims to create various models for a PAM and to evaluate their accuracy in reflecting PAM behavior. An experiment-based modeling approach was adopted to collect the data needed to model the PAM accurately. The data were collected for different pressure setpoints and with different loads. Four system modeling techniques were utilized: (i) curve/surface fitting, (ii) a Multi-Layer Perceptron Neural Network (MLP NN), (iii) a Nonlinear Auto-Regressive network with eXogenous inputs (NARX NN), and (iv) an Adaptive Neuro-Fuzzy Inference System (ANFIS). The analysis of the four developed models showed that the MLP NN model outperformed all the others by having the smallest error. Therefore, a simple feedforward neural network can represent the complex muscle system better than more elaborate modeling techniques.
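As a rough illustration of the best-performing technique, the sketch below fits a small MLP regressor to synthetic (pressure, load) to contraction data. The input/output choice, network size, and the stand-in data are assumptions for illustration only, not the experimental setup or hyperparameters reported in the paper.

```python
# Minimal sketch: fitting an MLP to PAM data (pressure, load) -> contraction.
# The data below are synthetic placeholders, not the authors' measurements.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split

# columns = [pressure_setpoint_kPa, load_kg], target = contraction_mm (assumed units)
X = np.random.uniform([100, 0], [600, 10], size=(500, 2))
y = 0.05 * X[:, 0] - 2.0 * X[:, 1] + np.random.normal(0, 0.5, 500)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
mlp = MLPRegressor(hidden_layer_sizes=(10, 10), max_iter=5000, random_state=0)
mlp.fit(X_tr, y_tr)
print("test R^2:", mlp.score(X_te, y_te))
```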
{"title":"Comparative Modeling Study of Pneumatic Artificial Muscle Using Neural Networks, ANFIS and Curve Fitting","authors":"M. A. Mallouh, W. Araydah, Basel Jouda, M. Al-Khawaldeh","doi":"10.1109/ICARA56516.2023.10125812","DOIUrl":"https://doi.org/10.1109/ICARA56516.2023.10125812","url":null,"abstract":"Pneumatic Artificial Muscles (PAMs) are widely used in the fields of biorobots and medicine due to their flexibility, safe usage, lack of mechanical wear, low cost of manufacturing, and high ratio of power to weight. Obtaining an accurate PAM model is crucial for building a controller that obtains the required performance specifications. This study aims to create various models for a PAM and to evaluate them with respect to their accuracy in reflecting PAM behavior. An experimental-based modeling approach was adopted to collect the necessary data in order to accurately model the PAM. The data were collected for different pressure setpoints and with different loads. Four system modeling techniques were utilized: (i) curve/surface fitting, (ii) Multi-Layer Perceptron Neural Network (MLP NN), (iii) Nonlinear Auto-Regressive with eXogenous (NARX NN) and (IV) Adaptive Neuro Fuzzy Inference System (ANFIS). The analysis of the four developed models showed that the performance of the MLP NN model exceeded all other models by having the smallest error. Therefore, a simple feedforward neural network can represent the complex muscle system compared to other complex modeling techniques.","PeriodicalId":443572,"journal":{"name":"2023 9th International Conference on Automation, Robotics and Applications (ICARA)","volume":"10 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-02-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130464675","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A Cable-Driven Robotic Eye for Understanding Eye-Movement Control
Pub Date: 2023-02-10 | DOI: 10.1109/ICARA56516.2023.10126021
A. John, A. Opstal, Alexandre Bernardino
We propose a design for a bio-inspired robotic eye, with six independently controlled muscles, that is suitable for studying the emergence of the characteristics of human saccadic eye movements. Understanding how characteristics such as the restriction of eye orientations to a 2D manifold, straight saccadic trajectories, and the saturating relationship between saccade amplitude and its peak velocity arise in a highly nonlinear system with non-commutativity of rotations is not trivial. Although earlier studies have addressed some of these problems, none have so far considered the full 3D complexity of ocular kinematics and dynamics. Our design contains a spherical eye actuated by six motor-driven cables with realistic pulling directions to mimic the six extraocular muscles. The coupling between the eyeball and the eye socket has been designed to form a damped rotational system, which is key to understanding the signals involved in the control of artificial and biological eyes. We present the mechanical design of the robotic system and a simulation model based on it. The system has a large range of movement, and its dynamical responses to step inputs are shown, illustrating its ability to perform a wide range of eye movements with the appropriate characteristics.
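To make the damped rotational system notion concrete, here is a minimal numerical sketch of a second-order damped rotation responding to a step torque. The inertia, damping, and stiffness values are assumed for illustration and are not taken from the paper.

```python
# Euler integration of J*theta'' + b*theta' + k*theta = torque (illustrative values only).
import numpy as np

J, b, k = 1e-4, 5e-3, 0.1      # inertia, damping, stiffness (assumed)
dt, T = 1e-3, 0.5
theta, omega = 0.0, 0.0
torque = 0.02                  # step input

trajectory = []
for _ in range(int(T / dt)):
    alpha = (torque - b * omega - k * theta) / J   # angular acceleration
    omega += alpha * dt
    theta += omega * dt
    trajectory.append(theta)

print("angle after 0.5 s (rad):", trajectory[-1])
print("expected steady state (rad):", torque / k)
```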
{"title":"A Cable-Driven Robotic Eye for Understanding Eye-Movement Control","authors":"A. John, A. Opstal, Alexandre Bernardino","doi":"10.1109/ICARA56516.2023.10126021","DOIUrl":"https://doi.org/10.1109/ICARA56516.2023.10126021","url":null,"abstract":"We propose a design for a bio-inspired robotic eye, with 6 independently controlled muscles, that is suitable for studying the emergence of human saccadic eye movements char-acteristics. Understanding how characteristics like the restriction of eye orientations to a 2D manifold, straight saccadic trajecto-ries, and saturating relationship between saccade amplitude and its peak velocity come about in a highly nonlinear system with non-commutativity of rotations is not trivial. Although earlier studies have addressed some of these problems, none have so far considered the full 3D complexity of ocular kinematics and dynamics. Our design contains a spherical eye actuated by six motor-driven cables with realistic pulling directions to mimic the six extraocular muscles. The coupling between the eyeball and eye socket has been designed to specify a damped rotational system, which is key to understanding the signals involved in the control of artificial and biological eyes. We present the mechanical design of the robotic system and a simulation model based on it. The system has a large range of movement and its dynamical responses to step inputs are shown, thus illustrating its ability to perform a wide range of eye movements with the appropriate characteristics.","PeriodicalId":443572,"journal":{"name":"2023 9th International Conference on Automation, Robotics and Applications (ICARA)","volume":"97 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-02-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"134336807","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A Discrete-time Distributed Optimization Algorithm for Multi-robot Coordination Target Monitor
Pub Date: 2023-02-10 | DOI: 10.1109/ICARA56516.2023.10125641
Yanling Zheng, Qingshan Liu, Guoyi Chi
In this paper, the task of multi-robot coordinated target monitoring is formulated as an optimization problem. The overall cost function is the sum of local cost functions with which each robot evaluates its best location. To encircle the target, a global equality constraint is introduced, and convex sets are built for the feasibility constraints on the robots' locations. A distributed discrete-time algorithm is then developed for the multi-robot coordinated monitoring task, and it is proven to converge to an optimal solution of the established optimization problem under certain initial restrictions. Finally, a numerical simulation shows the effectiveness of the proposed distributed optimization approach.
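The paper's own algorithm is not reproduced here, but the following sketch shows the general flavour of consensus-based distributed gradient descent on a sum of local quadratic costs. The cost functions, mixing matrix, and step size are assumptions, and the paper's equality and set constraints are omitted.

```python
# Illustrative consensus + local gradient steps for minimizing sum_i ||x - c_i||^2.
import numpy as np

np.random.seed(0)
n_robots, dim = 4, 2
targets = np.random.randn(n_robots, dim)           # c_i: each robot's preferred location
x = np.random.randn(n_robots, dim)                 # each robot's local estimate
W = np.full((n_robots, n_robots), 1.0 / n_robots)  # doubly stochastic mixing (complete graph)
step = 0.1

for _ in range(200):
    x = W @ x                                      # consensus (mixing) step
    x -= step * 2.0 * (x - targets)                # local gradient step of f_i
print("consensus estimate:", x.mean(axis=0))
print("true optimum      :", targets.mean(axis=0))
```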
{"title":"A Discrete-time Distributed Optimization Algorithm for Multi-robot Coordination Target Monitor","authors":"Yanling Zheng, Qingshan Liu, Guoyi Chi","doi":"10.1109/ICARA56516.2023.10125641","DOIUrl":"https://doi.org/10.1109/ICARA56516.2023.10125641","url":null,"abstract":"In this paper, the task of multi-robot coordination monitor as an optimization problem is formulated. The whole cost function consists of the sum of local cost functions for each robot to evaluate the best location. To encircle the target, a global equality constraint is introduced, and convex sets are built for the feasibility constraints of robots' location. Then, a distributed discrete-time algorithm is developed for the task of multi-robot coordination monitor, and it is also proven to converge to an optimal solution of the established optimization problem under certain initial restriction. Finally, a numerical simulation shows the effectiveness of the proposed distributed optimization approach.","PeriodicalId":443572,"journal":{"name":"2023 9th International Conference on Automation, Robotics and Applications (ICARA)","volume":"37 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-02-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"134051576","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Autonomous Navigation of Quadrotors Using Tactile Feedback
Pub Date: 2023-02-10 | DOI: 10.1109/ICARA56516.2023.10125600
N. Borkar, P. Krishnamurthy, A. Tzes, F. Khorrami
In this paper, we present a novel approach for autonomous navigation of quadrotors in complex unknown environments using tactile feedback. The approach uses an array of force/contact sensors on the quadrotor to determine local obstacle geometry and follow contours of sensed objects. The approach is particularly useful in scenarios where visibility is limited, such as in dark or smoky/foggy conditions, in which vision-based navigation is not possible. To show the efficacy of the proposed approach, we perform simulation studies in a variety of environments and demonstrate that the quadrotor is able to autonomously navigate without any a priori knowledge of the environment and without relying upon any vision-aided sensing of the environment.
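As a toy illustration of contact-driven contour following (not the authors' controller), the function below maps two hypothetical binary contact readings to a planar velocity command, steering so that contact with the obstacle is kept on one side of the vehicle.

```python
# Right-hand contour following from binary contact sensors (toy planar sketch).
def contour_follow_command(front_contact: bool, right_contact: bool):
    """Return (forward_speed, yaw_rate) from two assumed binary contact sensors."""
    if front_contact:
        return 0.0, +1.0    # blocked ahead: rotate left in place
    if right_contact:
        return 0.5, 0.0     # obstacle on the right: keep moving along its contour
    return 0.3, -1.0        # lost contact: curve right to reacquire the surface

print(contour_follow_command(front_contact=False, right_contact=True))  # (0.5, 0.0)
```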
{"title":"Autonomous Navigation of Quadrotors Using Tactile Feedback","authors":"N. Borkar, P. Krishnamurthy, A. Tzes, F. Khorrami","doi":"10.1109/ICARA56516.2023.10125600","DOIUrl":"https://doi.org/10.1109/ICARA56516.2023.10125600","url":null,"abstract":"In this paper, we present a novel approach for autonomous navigation of quadrotors in complex unknown environments using tactile feedback. The approach uses an array of force/contact sensors on the quadrotor to determine local obstacle geometry and follow contours of sensed objects. The approach is particularly useful in scenarios where visibility is limited, such as in dark or smoky/foggy conditions, in which vision-based navigation is not possible. To show the efficacy of the proposed approach, we perform simulation studies in a variety of environments and demonstrate that the quadrotor is able to autonomously navigate without any a priori knowledge of the environment and without relying upon any vision-aided sensing of the environment.","PeriodicalId":443572,"journal":{"name":"2023 9th International Conference on Automation, Robotics and Applications (ICARA)","volume":"21 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-02-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122538114","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Online Surveying System for Experimentally Testing the Human Perception of Visual Gestures
Pub Date: 2023-02-10 | DOI: 10.1109/ICARA56516.2023.10125860
Márk Domonkos, Ádám Tresó, János Botzheim
In this paper, we emphasize the need for an intuitive and easy-to-understand way of communicating during Human-Robot Collaboration (HRC), mainly in industrial scenarios. With the new communication design, the mental demands on the human workforce during collaboration can be lowered by feedback given by the robot in a situation-aware way. This kind of feedback can be of high importance during close cooperation in a manufacturing scenario. Another goal of this paper is to present the progress of earlier research that similarly dealt with visual signals during HRC. The goal during the design of the proposed novel methodology was to make research on visual gestures in Human-Robot Interaction more effective and flexible. To address these demands, an online surveying application is introduced, and an initial proof-of-concept test was also conducted. During the investigation, we introduced emotional states in the test as a supporting modality for later use. From the analysis, we concluded that visual signals do have properties that can affect the perception of the viewer.
{"title":"Online Surveying System for Experimentally Testing the Human Perception of Visual Gestures","authors":"Márk Domonkos, Ádám Tresó, János Botzheim","doi":"10.1109/ICARA56516.2023.10125860","DOIUrl":"https://doi.org/10.1109/ICARA56516.2023.10125860","url":null,"abstract":"In this paper, we would like to emphasize the need for an intuitive and easy-to-understand way of communication during a Human-Robot Collaboration (HRC) mainly in industrial scenarios. With the new communication design, the mental demands of the human workforce during collaboration can be lowered by the feedback given by the robot in a situation-aware way. This kind of feedback in close cooperation can maintain high importance in a manufacturing scenario. Another goal of this paper is to present the progress of former research that similarly dealt with visual signals during HRC. The goal during the design of the proposed novel methodology was to make the research of visual gestures in Human-Robot Interactions more effective and flexible. To address these demands an online surveying application is introduced and an initial proof of concept nature test was also conducted. During the investigation, we introduced emotional states in the test as a supporting modality for later use. From the analysis, we concluded that visual signals do have properties that can affect the perception of the viewer.","PeriodicalId":443572,"journal":{"name":"2023 9th International Conference on Automation, Robotics and Applications (ICARA)","volume":"10 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-02-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127929815","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Towards Clear Evaluation of Robotic Visual Semantic Navigation
Pub Date: 2023-02-10 | DOI: 10.1109/ICARA56516.2023.10125866
Carlos Gutiérrez-Álvarez, Sergio Hernández-García, Nadia Nasri, Alfredo Cuesta-Infante, R. López-Sastre
In this paper we address the problem of visual semantic navigation (VSN), in which a robot needs to navigate through an environment to reach an object with access only to egocentric RGB perception sensors. This is a recently explored problem, where most approaches leverage the latest advances in deep learning models for visual perception, combined with reinforcement learning (RL) strategies. Nonetheless, after a review of the literature, it is difficult to perform direct comparisons between the different solutions. The main difficulties lie in the fact that the navigation environments in which the experimental metrics are reported are not accessible, and each approach uses different RL libraries. In this paper, we release a publicly available experimental setup for the VSN problem, with the aim of providing a clear benchmark. It has been constructed using pyRIL, an open-source Python library for RL, and two navigation environments: MiniWorld-Maze from gym-miniworld, and one 3D scene from the HM3D dataset using the AI Habitat simulator. We finally propose a state-of-the-art VSN model, consisting of a Contrastive Language-Image Pretraining (CLIP) visual encoder plus a set of two recurrent neural networks for producing the discrete navigation actions. This model is evaluated in the proposed experimental setup, with a careful analysis of the main VSN challenges, namely the sparse-rewards problem and the exploitation-exploration trade-off. Code is available at: https://github.com/gramuah/vsn.
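A hypothetical sketch of the described model shape is given below: a visual encoder feeding two stacked recurrent layers that output logits over discrete navigation actions. The CLIP encoder is replaced by a placeholder linear layer, and all dimensions are assumptions rather than values used in the paper (see the released code for the actual model).

```python
# Placeholder policy: visual features -> two GRUs -> discrete action logits.
import torch
import torch.nn as nn

class VSNPolicy(nn.Module):
    def __init__(self, feat_dim=512, hidden=256, n_actions=4):
        super().__init__()
        self.encoder = nn.Linear(3 * 64 * 64, feat_dim)   # stand-in for a CLIP image encoder
        self.rnn1 = nn.GRU(feat_dim, hidden, batch_first=True)
        self.rnn2 = nn.GRU(hidden, hidden, batch_first=True)
        self.head = nn.Linear(hidden, n_actions)           # logits over discrete actions

    def forward(self, rgb_seq):                             # rgb_seq: (B, T, 3, 64, 64)
        feats = self.encoder(rgb_seq.flatten(2))            # (B, T, feat_dim)
        h, _ = self.rnn1(feats)
        h, _ = self.rnn2(h)
        return self.head(h)                                 # (B, T, n_actions)

policy = VSNPolicy()
print(policy(torch.randn(1, 8, 3, 64, 64)).shape)           # torch.Size([1, 8, 4])
```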
{"title":"Towards Clear Evaluation of Robotic Visual Semantic Navigation","authors":"Carlos Gutiérrez-Álvarez, Sergio Hernández-García, Nadia Nasri, Alfredo Cuesta-Infante, R. López-Sastre","doi":"10.1109/ICARA56516.2023.10125866","DOIUrl":"https://doi.org/10.1109/ICARA56516.2023.10125866","url":null,"abstract":"In this paper we address the problem of visual semantic navigation (VSN), in which a robot needs to navigate through an environment to reach an object having only access to egocentric RGB perception sensors. This is a recently explored problem, where most of the approaches leverage last advances in deep learning models for visual perception, combined with reinforcement learning (RL) strategies. Nonetheless, after a review of the literature, it is complicated to perform direct comparisons between the different solutions. The main difficulties lie in the fact that the navigation environments in which the experimental metrics are reported are not accessible, and each approach uses different RL libraries. In this paper, we release a publicly available experimental setup for the VSN problem, with the aim of providing a clear benchmark. It has been constructed using pyRIL, an open source python library for RL, and two navigation environments: Miniwolrd-Maze from gym-miniworld, and one 3D scene from HM3D dataset using AI Habitat simulator. We finally propose a state-of-the-art VSN model, consisting in a Contrastive Language Image Pretraining (CLIP) visual encoder plus a set of two recurrent neural networks for producing the discrete navigation actions. This model is evaluated in the proposed experimental setup, with a careful analysis of the main VSN challenges, namely: the sparse rewards problem; and the exploitation-exploration trade-off. Code is available at: https://github.com/gramuah/vsn.","PeriodicalId":443572,"journal":{"name":"2023 9th International Conference on Automation, Robotics and Applications (ICARA)","volume":"43 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-02-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133101588","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Collision Avoidance of Multiple Moving Agents by Adapting the A* Algorithm
Pub Date: 2023-02-10 | DOI: 10.1109/ICARA56516.2023.10125730
Kevin Neuschwander, Rolf Dornberger, T. Hanne
The A* algorithm is one of the most popular pathfinding algorithms. The basic algorithm can reliably find the best path for an agent in a static environment. However, there is only limited knowledge of how the algorithm behaves in a dynamic context. One important dynamic element in a pathfinding problem might be other agents moving simultaneously in the same environment, as in the application scenarios of various real-time strategy games. With the basic A* algorithm, these agents could collide, especially when moving around an obstacle or through a narrow passage. To avoid collisions, a modified agent control is proposed. This extension introduces a waiting time when an agent moves to a place where another agent is already located. The waiting-time operator is directly included in the optimization algorithm, stimulating the algorithm to search for alternative routes that avoid these waiting times. The resulting routes might be longer in distance but could be faster because the agent avoids the blockade caused by the other agent. Experiments with different settings indicate that the algorithm achieves this goal: in all settings, including narrow passages, collisions between agents were no longer detected. Furthermore, searching for alternative routes helps the algorithm find paths that are more than 10% faster. The duration of the slowest path can also be reduced in 80% of cases.
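The sketch below illustrates one plausible way to realize such a waiting-time extension: a space-time A* in which staying in place is an extra move and cells reserved by other agents at a given time step are blocked. The reservation scheme and cost model are assumptions, not the paper's exact operator.

```python
# Space-time A* with a wait action; `reserved` maps (x, y, t) -> True for cells
# already claimed by other agents at a given time step.
import heapq

def astar_with_wait(grid, start, goal, reserved, max_t=200):
    rows, cols = len(grid), len(grid[0])
    h = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])   # Manhattan heuristic
    open_heap = [(h(start), 0, start, [start])]
    seen = set()
    while open_heap:
        f, t, pos, path = heapq.heappop(open_heap)
        if pos == goal:
            return path
        if (pos, t) in seen or t > max_t:
            continue
        seen.add((pos, t))
        # 4-connected moves plus the waiting move (0, 0)
        for dx, dy in [(1, 0), (-1, 0), (0, 1), (0, -1), (0, 0)]:
            nx, ny = pos[0] + dx, pos[1] + dy
            if 0 <= nx < rows and 0 <= ny < cols and grid[nx][ny] == 0 \
                    and not reserved.get((nx, ny, t + 1), False):
                heapq.heappush(open_heap,
                               (t + 1 + h((nx, ny)), t + 1, (nx, ny), path + [(nx, ny)]))
    return None

grid = [[0, 0, 0], [1, 1, 0], [0, 0, 0]]
# (0, 1) is reserved at t=1, so the agent waits one step, then routes around row 1.
print(astar_with_wait(grid, (0, 0), (2, 0), {(0, 1, 1): True}))
```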
{"title":"Collision Avoidance of Multiple Moving Agents by Adapting the A* Algorithm","authors":"Kevin Neuschwander, Rolf Dornberger, T. Hanne","doi":"10.1109/ICARA56516.2023.10125730","DOIUrl":"https://doi.org/10.1109/ICARA56516.2023.10125730","url":null,"abstract":"The A* algorithm is one of the most popular pathfinding algorithms. The basic algorithm can reliably find the best path for an agent in a static environment. However, there is only limited knowledge on how the algorithm behaves in a dynamic context. One important dynamic element in a pathfinding problem might be other agents moving simultaneously in the same environment, such as in the application scenarios of various real-time strategy games. With the basic $mathbf{A}^{*}$ algorithm, these agents could collide, especially when moving around an obstacle or through a narrow passage. To avoid collisions, a modified agent control is proposed. This extension consists of introducing a waiting time when an agent moves to a place where another agent is already located. The waiting time operator is directly included into the optimization algorithm, stimulating the algorithm to search for alternative routes that avoid these waiting times. The resulting routes might be longer in distance but could be faster because the agent avoids the other blockade. Experiments with different settings indicate that the algorithm achieves this goal: In all settings, including narrow passages, collisions between agents were no longer detected. Furthermore, searching for alternative routes helps the algorithm find paths which are more than 10% faster. The duration of the slowest path can also be reduced in 80% of cases.","PeriodicalId":443572,"journal":{"name":"2023 9th International Conference on Automation, Robotics and Applications (ICARA)","volume":"152 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-02-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123261665","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
UAV Navigation Using Map Slicing and Safe Path Computation
Pub Date: 2023-02-10 | DOI: 10.1109/ICARA56516.2023.10125893
Halil Utku Unlu, Dimitris Chaikalis, Athanasios Tsoukalas, A. Tzes
This paper is concerned with the safe path planning of a drone while exploring an unknown space. The drone is localized by fusing measurements from sensors including an IMU, an RGB-D sensor, and an optical flow system, while executing an RTAB-Map SLAM algorithm. The 3D-occupancy Octomap is generated online, and a slicing algorithm is employed to compute 2D maps. The maps' traversable coordinates are identified and used as potential points for the drone's intermediate navigation to the destination. The final segment corresponds to a shortest Chebyshev-length path between all frontier pixels and the endpoint over the unexplored map region. The drone's path is computed using a skeletal path between the identified map boundaries so that the drone moves from its current location through the free map coordinates to the destination point. Simulation studies within the interior of an apartment indicate the efficiency and effectiveness of the proposed method.
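A minimal sketch of the slicing step is shown below, under the assumption that the 3D occupancy map is available as a dense array with occupied/free/unknown voxels; the actual system works on an Octomap, so this only illustrates the idea of collapsing the flight-height band into a 2D map.

```python
# Collapse the voxels inside the drone's flight-height band into a 2D grid:
# 1 = occupied, 0 = traversable, -1 = unknown (assumed encoding).
import numpy as np

def slice_occupancy(occ3d, z_min, z_max):
    band = occ3d[:, :, z_min:z_max]                  # voxels the drone could collide with
    occupied = (band == 1).any(axis=2)               # any occupied voxel blocks the cell
    unknown = (band == -1).all(axis=2) & ~occupied   # fully unexplored columns stay unknown
    grid2d = np.zeros(occ3d.shape[:2], dtype=int)
    grid2d[occupied] = 1
    grid2d[unknown] = -1
    return grid2d

occ3d = np.zeros((8, 8, 10), dtype=int)
occ3d[3, 3, 2:6] = 1                                 # a column-shaped obstacle
occ3d[:, 6:, :] = -1                                 # unexplored region
print(slice_occupancy(occ3d, 1, 5))
```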
{"title":"UAV- Navigation Using Map Slicing and Safe Path Computation","authors":"Halil Utku Unlu, Dimitris Chaikalis, Athanasios Tsoukalas, A. Tzes","doi":"10.1109/ICARA56516.2023.10125893","DOIUrl":"https://doi.org/10.1109/ICARA56516.2023.10125893","url":null,"abstract":"This paper is concerned with the safe path planning of a drone while exploring an unknown space. The drone is localized by fusing measurements from sensors including an IMU, RGB-D sensor, and an optical flow system, while executing an RTAB-Map SLAM algorithm. The 3D-occupancy Octomap is generated online and a slicing algorithm is employed to compute 2D-maps. The maps' traversible coordinates are identified and used as potential points for the drone intermediate navigation to the destination. The final segment corresponds to a shortest Chebyshev-length path between all frontier pixels and the endpoint over the unexplored map region. The drone's path is computed using a skeletal path between the identified map boundaries so that the drone moves from its current location through the free map coordinates to the destination point. Simulation studies using within the interior of an apartment indicate the efficiency and effectiveness of the proposed method.","PeriodicalId":443572,"journal":{"name":"2023 9th International Conference on Automation, Robotics and Applications (ICARA)","volume":"3 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-02-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125094406","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
MQTT Enabled Simulation Interface for Motion Execution of Industrial Robots
Pub Date: 2023-02-10 | DOI: 10.1109/ICARA56516.2023.10125673
Alberto Sartori, R. Waspe, Christian Schlette
With the increasing complexity of industrial systems, robotic applications are required to involve many different components. The interfaces used to access information and control processes are of great importance for developing interconnected Industry 4.0 solutions. To improve the connectivity of the software used for research on industrial automation, we introduce an MQTT Extension that enables external clients to read and write the values of the properties used to digitally describe a robotic system. The integration of an MQTT interface allows parametrizing blocks in a Visual Programming environment, enabling external clients to request simulations of robot motions for a specific target position. The introduced functionality is demonstrated in two applications: one in which the outcome of a program simulation is stored in a database and later executed on a real demonstrator using an industrial robot, and one in which a virtual robot provides on-demand motion simulations.
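A hypothetical external client could look like the sketch below, written against the paho-mqtt 1.x client API. The broker address, topic layout, and property names are assumptions for illustration and do not reflect the extension's actual naming scheme.

```python
# Hypothetical MQTT client reading and writing a simulated robot's property values.
import json
import paho.mqtt.client as mqtt

BROKER = "localhost"   # assumed broker address

def on_message(client, userdata, msg):
    # Property updates published by the simulation arrive here.
    print(f"{msg.topic}: {msg.payload.decode()}")

client = mqtt.Client()                 # paho-mqtt 1.x style constructor
client.on_message = on_message
client.connect(BROKER, 1883)
client.subscribe("robot1/properties/#")                       # read all property values
client.publish("robot1/properties/targetPose/set",            # write a property value
               json.dumps({"x": 0.4, "y": 0.1, "z": 0.3}))
client.loop_forever()
```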
{"title":"MQTT Enabled Simulation Interface for Motion Execution of Industrial Robots","authors":"Alberto Sartori, R. Waspe, Christian Schlette","doi":"10.1109/ICARA56516.2023.10125673","DOIUrl":"https://doi.org/10.1109/ICARA56516.2023.10125673","url":null,"abstract":"With the increasing complexity of industrial systems, it is required to involve different components in robotic applications. The interfaces used to access information and control processes are of great importance for developing interconnected Industry 4.0 solutions. To improve the connectivity of the software used for research on industrial automation, we introduce an MQTT Extension that enables external clients to read and write the values of the properties used to digitally describe a robotic system. The integration of an MQTT interface allows parametrizing blocks in a Visual Programming environment, enabling external clients to ask for simulations of robot motions for a specific target position. The introduced functionality is demonstrated in two applications where the outcome of a program simulation is stored in a database and later executed in a real demonstrator using an industrial robot, and where a virtual robot provides on-demand motion simulations.","PeriodicalId":443572,"journal":{"name":"2023 9th International Conference on Automation, Robotics and Applications (ICARA)","volume":"61 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-02-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130939667","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Design and Feasibility Test of an Automatic Scraping Robot
Pub Date: 2023-02-10 | DOI: 10.1109/ICARA56516.2023.10125772
Zhenmeng Cui, Liang Han, Guancheng Dong, Yingze Lin, Yangzhen Gao, Shuaishuai Fan
Scraping is a key technology in high-precision machine tool machining. Scraping can eliminate accumulated tolerances and improve the assembly accuracy of the machine tool. It is time-consuming and tedious manual labor, usually conducted by experienced technicians. To overcome these shortcomings, a novel automatic scraping robot was designed and tested in this study. The robot includes a 3-axis moving mechanism, a vision recognition system, a 3D measurement system, and a control system. In this study, milling is used to simulate the shape of tool marks produced in the traditional scraping process. After a series of tests, the designed robot has been verified to be able to perform automatic scraping work. The workpiece scraped with this robot meets the standard of a high-precision machine.
{"title":"Design and Feasibility Test of an Automatic Scraping Robot","authors":"Zhenmeng Cui, Liang Han, Guancheng Dong, Yingze Lin, Yangzhen Gao, Shuaishuai Fan","doi":"10.1109/ICARA56516.2023.10125772","DOIUrl":"https://doi.org/10.1109/ICARA56516.2023.10125772","url":null,"abstract":"Scraping is a key technology in high-precision machine tool machining. Scraping can eliminate the accumulated tolerances and improve the assembly accuracy of the machine tool. Scraping is a time-consuming and tedious manual labor, which is usually conducted by experienced technician. To overcome these shortcomings, a novel automatic scraping robot was designed and tested in this study. The robot includes a 3-axis moving mechanism, a vision recognition system, a 3-D measurement system, and a control system. In this study, milling is used to simulate the shape of tool marks in the traditional scraping process. After a series of tests, the designed robot has been verified to be able to perform automatic scraping work. The workpiece scraped with this robot meets the standard of a high precision machine.","PeriodicalId":443572,"journal":{"name":"2023 9th International Conference on Automation, Robotics and Applications (ICARA)","volume":"36 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-02-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129567435","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}