2022 Sixth IEEE International Conference on Robotic Computing (IRC)

Pub Date : 2022-12-01 | DOI: 10.1109/IRC55401.2022.00082
Intelligent Adaptative Robotic System for Physical Interaction Tasks
Benjamín Tapia Sal Paz, Gorka Sorrosal, Aitziber Mancisidor
Robotics has made great strides in recent years, from mobile robots for household tasks to fully automated systems in industrial environments. Initially, the main focus of robotics was to provide solutions that improved productivity in repetitive tasks and safeguarded people in dangerous environments. Nowadays, following advances in technology and Industry 4.0, these objectives have shifted to more demanding ones that require flexible, autonomous, and intelligent solutions, i.e., systems capable of performing a variety of tasks with minimal programming or system specification. With the rise of Artificial Intelligence, novel algorithms have been developed that improve the capabilities of robotic systems by making them more intelligent and autonomous. The aim of this work is the development of an adaptive intelligent robotic system for physical interaction tasks. In this kind of task, the robot has strong physical interaction with the environment, which imposes dynamic requirements that must be fulfilled. To achieve this, a three-part framework made up of control, monitoring, and adaptation systems is proposed.

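The proposed control/monitoring/adaptation decomposition can be illustrated with a minimal sketch. All interfaces here (the class names, the force-tracking objective, and the stiffness-adaptation rule) are hypothetical, since the abstract does not specify them:

```python
# Hedged schematic of a three-part control / monitoring / adaptation
# framework for a physical interaction task (all interfaces assumed).
class Controller:
    def __init__(self, stiffness=10.0):
        self.k = stiffness

    def command(self, pos_error):
        return self.k * pos_error          # simple proportional force law

class Monitor:
    def assess(self, measured_force, desired_force):
        return desired_force - measured_force

class Adapter:
    def adapt(self, controller, force_error, rate=0.05):
        controller.k += rate * force_error # nudge stiffness toward target

def run(desired_force=5.0, pos_error=1.0, steps=200):
    ctrl, mon, ada = Controller(), Monitor(), Adapter()
    for _ in range(steps):
        f = ctrl.command(pos_error)        # control system acts
        err = mon.assess(f, desired_force) # monitoring system evaluates
        ada.adapt(ctrl, err)               # adaptation system retunes
    return ctrl.command(pos_error)
```

Running the loop drives the commanded interaction force toward the desired value by retuning the controller stiffness, which is the kind of online adaptation the abstract describes.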
Pub Date : 2022-12-01 | DOI: 10.1109/IRC55401.2022.00056
Implementation of Reinforcement Learning Environment for Mobile Manipulator Using Robo-gym
Myunghyun Kim, Sungwoo Yang, Soo-Hyek Kang, Wonha Kim, D. Kim
Many studies use reinforcement learning in simulation environments to control robots. Since no simulation environment provides reinforcement learning support for every robot, it is important for researchers to choose a simulation environment that supports the robots they use. This paper adds a new robot platform to robo-gym, a reinforcement learning framework used with the Gazebo simulation environment. The added platform is Husky-UR3, a mobile manipulator that can recognize the coordinates of a target point by itself through its camera. Experiments in which the robot recognizes and follows a target confirm that the mobile manipulator learning environment is well established.

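The kind of environment the paper adds can be sketched with a minimal, self-contained gym-style stand-in. This is an assumed interface and reward, not robo-gym's actual API: the observation is the camera-derived target position relative to the mobile base, and the reward encourages following the target.

```python
import numpy as np

# Minimal gym-style sketch of a mobile-manipulator "follow the target"
# environment (interface and reward are illustrative assumptions).
class MobileManipulatorEnv:
    def __init__(self, seed=0):
        self.rng = np.random.default_rng(seed)

    def reset(self):
        self.base = np.zeros(2)
        self.target = self.rng.uniform(-2.0, 2.0, size=2)
        return self.target - self.base            # observation

    def step(self, action, dt=0.1):
        # action: planar base velocity, clipped to the actuator limits
        self.base += dt * np.clip(action, -1.0, 1.0)
        obs = self.target - self.base
        dist = np.linalg.norm(obs)
        reward = -dist                            # closer is better
        done = dist < 0.05                        # target reached
        return obs, reward, done
```

A trivially greedy policy that feeds the observation back as the action moves the base toward the target and terminates the episode.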
Pub Date : 2022-12-01 | DOI: 10.1109/IRC55401.2022.00055
Human-Aware Waypoint Planner for Mobile Robot in Indoor Environments
Sungwoo Yang, Sumin Kang, Myunghyun Kim, D. Kim
As the use of robots in indoor environments increases, it has become common for humans and robots to co-exist in such environments. Most human-aware navigation algorithms consider only humans in the robot's field of view. In L-shaped corridors, however, there is a high possibility that a human suddenly appears from around the corner. To deal with this situation, we propose an improved corner detection algorithm and a novel waypoint planner, WPC. The proposed algorithm is validated through simulations using PedSim and Gazebo.

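A generic illustration of the underlying idea (not the paper's WPC algorithm, which the abstract does not detail): before a blind L-shape corner, a waypoint can be offset toward the open side of the corridor so the area behind the corner becomes visible earlier and a suddenly appearing human can be avoided.

```python
import math

# Illustrative helper (hypothetical, not from the paper): place a
# waypoint offset from a detected corner toward the open side.
def corner_waypoint(corner, open_side_dir, clearance=0.6):
    cx, cy = corner
    dx, dy = open_side_dir
    n = math.hypot(dx, dy)                     # normalize the direction
    return (cx + clearance * dx / n, cy + clearance * dy / n)
```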
Pub Date : 2022-12-01 | DOI: 10.1109/IRC55401.2022.00053
Privacy Protection and Regulatory Aspects in the context of Medical Apps
D. D’Auria, Fabio Persia
Due to the COVID-19 pandemic, there has been a significant increase in the development of medical apps worldwide in recent years, both in research projects and in industry. Unfortunately, the development of such apps has often been significantly slowed, if not stopped, by bureaucratic problems frequently related to privacy. In this paper, we therefore summarize the regulatory and privacy-protection aspects relevant to medical apps, in order to provide suggestions and guidelines for app designers and developers.

Pub Date : 2022-11-21 | DOI: 10.1109/IRC55401.2022.00041
Object-level 3D Semantic Mapping using a Network of Smart Edge Sensors
Julian Hau, S. Bultmann, Sven Behnke
Autonomous robots that interact with their environment require a detailed semantic scene model, for which volumetric semantic maps are frequently used. Scene understanding can be further improved by including object-level information in the map. In this work, we extend a multi-view 3D semantic mapping system consisting of a network of distributed smart edge sensors with object-level information, to enable downstream tasks that need object-level input. Objects are represented in the map via their 3D mesh model or, when no detailed 3D model is available, as an object-centric volumetric sub-map that can represent arbitrary object geometry. We propose a keypoint-based approach that estimates object poses via PnP and refines them via ICP alignment of the 3D object model with the observed point cloud segments. Object instances are tracked to integrate observations over time and to be robust against temporary occlusions. Our method is evaluated on the public BEHAVE dataset, where it shows pose estimation accuracy within a few centimeters, and in real-world experiments with the sensor network in a challenging lab environment, where multiple chairs and a table are tracked through the scene online and in real time, even under high occlusion.

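The refinement stage, aligning the 3D object model with an observed point-cloud segment, can be sketched as basic point-to-point ICP with Kabsch/SVD alignment. This is an illustrative re-implementation of the standard technique, not the authors' code:

```python
import numpy as np

def best_fit_transform(A, B):
    # least-squares rigid transform mapping point set A onto B (Kabsch/SVD)
    ca, cb = A.mean(0), B.mean(0)
    H = (A - ca).T @ (B - cb)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:          # guard against reflections
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = cb - R @ ca
    return R, t

def icp(model, scene, iters=50):
    # refine the alignment of `model` points onto `scene` points
    src = model.copy()
    R_total, t_total = np.eye(3), np.zeros(3)
    for _ in range(iters):
        # nearest-neighbour correspondences (brute force)
        d = ((src[:, None, :] - scene[None, :, :]) ** 2).sum(-1)
        nn = scene[d.argmin(1)]
        R, t = best_fit_transform(src, nn)
        src = src @ R.T + t
        # compose the incremental transform into the total one
        R_total, t_total = R @ R_total, R @ t_total + t
    return R_total, t_total
```

Given a reasonable initial pose from PnP, a few iterations recover the residual rotation and translation.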
Pub Date : 2022-11-21 | DOI: 10.1109/IRC55401.2022.00044
Learning Implicit Probability Distribution Functions for Symmetric Orientation Estimation from RGB Images Without Pose Labels
Arul Selvam Periyasamy, Luis Denninger, Sven Behnke
Object pose estimation is a necessary prerequisite for autonomous robotic manipulation, but the presence of symmetry increases the complexity of the pose estimation task. Existing methods for object pose estimation output a single 6D pose and thus lack the ability to reason about symmetries. Lately, modeling object orientation as a non-parametric probability distribution on the SO(3) manifold with neural networks has shown impressive results. However, acquiring large-scale datasets to train pose estimation models remains a bottleneck. To address this limitation, we introduce an automatic pose labeling scheme. Given RGB-D images without object pose annotations and 3D object models, we design a two-stage pipeline consisting of point cloud registration and render-and-compare validation to generate multiple symmetry-equivalent pseudo-ground-truth pose labels for each image. Using the generated pose labels, we train an ImplicitPDF model to estimate the likelihood of an orientation hypothesis given an RGB image. An efficient hierarchical sampling of the SO(3) manifold enables tractable generation of the complete set of symmetries at multiple resolutions. During inference, the most likely orientation of the target object is estimated using gradient ascent. We evaluate the proposed automatic pose labeling scheme and the ImplicitPDF model on a photorealistic dataset and the T-Less dataset, demonstrating the advantages of the proposed method.

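The core inference idea, scoring orientation hypotheses under a learned distribution and keeping the symmetry-equivalent modes, can be illustrated with a 1D toy stand-in: rotation about a single axis of a 4-fold symmetric object. The cosine likelihood below is an assumption standing in for the trained network, not the paper's model:

```python
import numpy as np

# Toy stand-in for an ImplicitPDF-style head: given an orientation
# hypothesis, return an unnormalized log-likelihood.  A 4-fold symmetric
# object yields 4 equivalent peaks spaced pi/2 apart.
def log_likelihood(theta, true_theta=0.3, folds=4):
    return np.cos(folds * (theta - true_theta))

def most_likely_orientations(n_grid=3600, folds=4):
    # dense grid over rotations about the symmetry axis
    thetas = np.linspace(0, 2 * np.pi, n_grid, endpoint=False)
    ll = log_likelihood(thetas, folds=folds)
    # keep every hypothesis within a tiny tolerance of the maximum:
    # these are the symmetry-equivalent modes
    return thetas[ll > ll.max() - 1e-6]
```

In the paper the grid lives on SO(3) and is refined hierarchically, and the final mode is polished by gradient ascent; the toy keeps only the "score hypotheses, keep equivalent maxima" structure.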
Pub Date : 2022-11-21 | DOI: 10.1109/IRC55401.2022.00027
State Estimation for Hybrid Locomotion of Driving-Stepping Quadrupeds
M. Hosseini, D. Rodriguez, Sven Behnke
Fast and versatile locomotion can be achieved with wheeled quadruped robots that drive quickly on flat terrain but are also able to overcome challenging terrain by adapting their body pose and by making steps. In this paper, we present a state estimation approach for four-legged robots with non-steerable wheels that enables hybrid driving-stepping locomotion. We formulate a Kalman Filter (KF) for state estimation that integrates the driven wheels into the filter equations and estimates the robot state (position and velocity) as well as the contribution of wheel driving to that state. Our estimation approach allows us to use the control framework of the Mini Cheetah quadruped robot with minor modifications. We tested our approach, in simulation and in the real world, on this robot augmented with actively driven wheels. The experimental results are available at https://www.ais.uni-bonn.de/~hosseini/se-dsq.

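The filter idea, folding wheel driving into the base-state estimate, can be sketched in one dimension. This constant-velocity Kalman filter whose measurement is the wheel-driving velocity is an illustrative simplification, not the paper's full formulation:

```python
import numpy as np

# 1-D sketch: state x = [position, velocity]; the wheels provide a
# velocity measurement z_wheel that the update step fuses into x.
def kf_step(x, P, z_wheel, dt, q=1e-3, r=1e-2):
    F = np.array([[1.0, dt], [0.0, 1.0]])   # constant-velocity transition
    H = np.array([[0.0, 1.0]])              # wheels measure velocity
    Q = q * np.eye(2)                       # process noise
    R = np.array([[r]])                     # measurement noise
    # predict
    x = F @ x
    P = F @ P @ F.T + Q
    # update with the wheel-velocity measurement
    y = np.array([z_wheel]) - H @ x
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x = x + K @ y
    P = (np.eye(2) - K @ H) @ P
    return x, P
```

Feeding a constant wheel velocity drives the velocity estimate to that value while the position estimate integrates it, i.e. the wheels' contribution enters the estimated base state directly.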
Pub Date : 2022-11-20 | DOI: 10.1109/IRC55401.2022.00034
Efficient Representations of Object Geometry for Reinforcement Learning of Interactive Grasping Policies
Malte Mosbach, Sven Behnke
Grasping objects of different shapes and sizes, a foundational and effortless skill for humans, remains a challenging task in robotics. Although model-based approaches can predict stable grasp configurations for known object models, they struggle to generalize to novel objects and often operate in a non-interactive, open-loop manner. In this work, we present a reinforcement learning framework that learns interactive grasping of geometrically diverse real-world objects by continuously controlling an anthropomorphic robotic hand. We explore several explicit representations of object geometry as input to the policy. Moreover, we propose to inform the policy implicitly through signed distances and show that these are naturally suited to guide the search through a shaped reward component. Finally, we demonstrate that the proposed framework is able to learn even in more challenging conditions, such as targeted grasping from a cluttered bin, where necessary pre-grasping behaviors such as object reorientation and the exploitation of environmental constraints emerge. Videos of learned interactive policies are available at https://maltemosbach.github.io/geometry_aware_grasping_policies.

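A signed-distance shaped reward can be sketched as follows. This is an illustrative component for a spherical object, an assumption for demonstration rather than the paper's exact formulation:

```python
import numpy as np

def sphere_sdf(p, center, radius):
    # signed distance to a sphere: negative inside, zero on the surface,
    # positive outside
    return np.linalg.norm(p - center, axis=-1) - radius

# Shaped reward component: penalize the fingertips' (clamped) signed
# distance to the object surface, guiding the hand toward contact.
def shaping_reward(fingertips, center, radius, scale=1.0):
    d = np.clip(sphere_sdf(fingertips, center, radius), 0.0, None)
    return -scale * d.mean()
```

Because the signed distance is defined everywhere in the workspace, it provides a dense gradient toward the object even before any contact occurs, which is what makes it well suited for reward shaping.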
Pub Date : 2022-11-17 | DOI: 10.1109/IRC55401.2022.00015
A Flexible MATLAB/Simulink Simulator for Robotic Floating-base Systems in Contact with the Ground
Nuno Guedelha, Venus Pasandi, Giuseppe L’Erario, Silvio Traversaro, Daniele Pucci (Istituto Italiano di Tecnologia, Genova, Italy)
Physics simulators are widely used in robotics, from mechanical design to dynamic simulation and controller design. This paper presents an open-source MATLAB/Simulink simulator for rigid-body articulated systems, including manipulators and floating-base robots. Thanks to MATLAB/Simulink features such as MATLAB system classes and Simulink function blocks, the simulator combines a programmatic and a block-based approach, resulting in a flexible design in which its different parts, including the physics engine, the robot-ground interaction model, and the state-evolution algorithm, are easily accessible and editable. Moreover, through Simulink dynamic mask blocks, the proposed framework supports robot models combining open-chain and closed-chain kinematics with any desired number of links interacting with the ground. The simulator can also integrate second-order actuator dynamics, and it benefits from a one-line installation and an easy-to-use Simulink interface.

Pub Date : 2022-11-15 | DOI: 10.1109/IRC55401.2022.00031
Autonomous Golf Putting with Data-Driven and Physics-Based Methods
Annika Junker, Niklas Fittkau, Julia Timmermann, A. Trächtler
We are developing a self-learning mechatronic golf robot using combined data-driven and physics-based methods, with the goal of having the robot autonomously learn to putt the ball from an arbitrary point on the green. Apart from the mechatronic control design of the robot, this task is accomplished by a camera system with image recognition and a neural network that predicts the stroke velocity vector required for a successful hole-in-one. To minimize the number of time-consuming interactions with the real system, the neural network is pretrained by evaluating basic physical laws on a model that approximates the golf-ball dynamics on the green surface in a data-driven manner. We thus demonstrate the synergetic combination of data-driven and physics-based methods on the golf robot as an example mechatronic system.

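The kind of basic physical law such a pretraining stage could evaluate can be sketched as follows. Assuming a ball rolling with constant friction deceleration mu*g (a textbook simplification; the friction coefficient and the flat-green assumption are illustrative, not the paper's learned green model), a ball launched at v0 stops after v0**2 / (2*mu*g), so reaching a hole at distance d requires roughly v0 = sqrt(2*mu*g*d):

```python
import math

# Required launch speed for the ball to come to rest at distance d
# under constant rolling-friction deceleration mu * g.
def required_putt_speed(distance_m, mu=0.1, g=9.81):
    return math.sqrt(2.0 * mu * g * distance_m)

# One pretraining sample: the stroke velocity vector (vx, vy) aimed
# at a hole at the given distance and bearing.
def pretraining_sample(distance_m, angle_rad, mu=0.1):
    v0 = required_putt_speed(distance_m, mu)
    return v0 * math.cos(angle_rad), v0 * math.sin(angle_rad)
```

Sweeping distances and bearings through such a model yields cheap (state, stroke-velocity) pairs for pretraining before any strokes are executed on the real green.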