Pub Date: 2020-11-20, DOI: 10.1109/ICRAE50850.2020.9310881
D. Padovani
The growing interest in energy efficiency, plug-and-play commissioning, and reduced maintenance for heavy-duty robotic manipulators points towards self-contained electro-hydraulic cylinders. These drives are characterized by extremely low damping that causes unwanted oscillations of the mechanical structure, so adding active damping to this class of energy-efficient architectures is essential. Hence, this paper bridges a literature gap by presenting a systematic comparison grounded in a model-based tuning of both pressure feedback and acceleration feedback. It is shown that both approaches substantially increase the system damping and improve the performance of the linear system. Acceleration feedback should be preferred since it modifies only the damping; however, high-pass-filtered pressure feedback can be a satisfactory alternative.
Title: "Adding Active Damping to Energy-Efficient Electro-Hydraulic Systems for Robotic Manipulators — Comparing Pressure and Acceleration Feedback". Published in: 2020 5th International Conference on Robotics and Automation Engineering (ICRAE).
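The abstract claims that acceleration feedback modifies only the damping. A minimal numerical sketch of why this holds, assuming a generic lightly damped second-order velocity plant G(s) = wn^2/(s^2 + 2*zeta*wn*s + wn^2) and hypothetical gain values (this is not the paper's hydraulic model): feeding back acceleration a = s*v as u = r - k_a*a adds a term k_a*wn^2*s to the closed-loop denominator, raising the damping ratio to zeta + k_a*wn/2 while leaving the natural frequency untouched.

```python
import numpy as np

def closed_loop_damping(wn, zeta, k_a):
    """Closed-loop damping ratio when acceleration feedback u = r - k_a * a
    wraps a lightly damped velocity plant G(s) = wn^2 / (s^2 + 2*zeta*wn*s + wn^2).
    Since a = s*v, the loop adds k_a*wn^2*s to the denominator."""
    den = [1.0, 2.0 * zeta * wn + k_a * wn**2, wn**2]
    poles = np.roots(den)
    # For an underdamped pair, zeta' = -Re(p) / |p|.
    return -poles.real[0] / abs(poles[0])

# A hypothetical, very lightly damped hydraulic axis: zeta = 0.05 at wn = 40 rad/s.
# A small acceleration gain raises the damping to zeta + k_a*wn/2 = 0.45,
# while the pole magnitude (natural frequency) stays at 40 rad/s.
```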
Pub Date: 2020-11-20, DOI: 10.1109/ICRAE50850.2020.9310850
Ignacio Cuiral-Zueco, G. López-Nicolás
An RGB-D-based method for computing occlusion-handling camera positions for proper object perception has been designed and implemented. This proposal is an improved alternative to our previous optimisation-based approach, and the contribution is twofold: the new method is geometry-based, and it is also able to handle dynamic occlusions. The approach makes extensive use of a ray-projection model, a key aspect being that the solution space is defined on a sphere surface around the object. The method has been designed with robotic applications in mind and therefore provides robust and versatile features: it requires neither training nor prior knowledge of the scene, making it suitable for diverse applications and scenarios. Satisfactory results have been obtained in real-time experiments.
Title: "Dynamic Occlusion Handling for Real Time Object Perception"
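The abstract constrains the solution space to a sphere surface around the object. As a hedged illustration of that geometric search space (the sampling scheme, grid resolution, and pole handling below are assumptions, not the authors' method), candidate camera positions can be enumerated on such a sphere:

```python
import math

def sphere_viewpoints(center, radius, n_az=8, n_el=3):
    """Candidate camera positions on a sphere around the object.
    Uniform azimuth steps, elevation steps that avoid the poles."""
    pts = []
    for i in range(n_az):
        az = 2.0 * math.pi * i / n_az
        for j in range(1, n_el + 1):
            el = math.pi * j / (n_el + 1) - math.pi / 2.0  # in (-pi/2, pi/2)
            pts.append((center[0] + radius * math.cos(el) * math.cos(az),
                        center[1] + radius * math.cos(el) * math.sin(az),
                        center[2] + radius * math.sin(el)))
    return pts
```

Each candidate could then be scored by ray-projection against the occluders, which is the part the paper's geometric method addresses.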
Pub Date: 2020-11-20, DOI: 10.1109/ICRAE50850.2020.9310908
Guanchao Pan, A. Liang, Jinhui Liu, Mei Liu, E. X. Wang
Currently, code-based positioning methods are limited to 2D space and cannot be used in non-planar 3D environments. To achieve navigation and positioning in a general 3D environment, this paper designs a monocular vision system that determines position and orientation using QR (quick response) codes. The system measures the 3D position and the three Euler angles of the monocular camera in a physical coordinate system. It consists of two modules: an image processing module and a pose calculation module. The image processing module preprocesses the captured image, detects and sorts the location points, and corrects and decodes the QR codes. The pose calculation module first calibrates the intrinsic and extrinsic parameters of the camera. Then, using the correspondence between the 2D pixel coordinates and the 3D physical coordinates of the QR-code positioning markers, the relative pose between the QR code and the camera is solved with an efficient perspective-n-point camera pose estimation algorithm to determine the 3D position and attitude of the robot or unmanned vehicle.
Title: "3-D Positioning System Based QR Code and Monocular Vision"
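The pose calculation step described above (2D-3D correspondences of a planar marker solved via perspective-n-point) can be sketched as follows. This is a generic planar-homography pose recovery in NumPy, assuming four coplanar QR corners and a calibrated camera; it illustrates the idea but does not reproduce the paper's EPnP-based pipeline:

```python
import numpy as np

def homography_dlt(obj_xy, img_uv):
    """Direct linear transform: H maps planar marker coords (x, y, 1) to pixels."""
    A = []
    for (x, y), (u, v) in zip(obj_xy, img_uv):
        A.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        A.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    _, _, Vt = np.linalg.svd(np.asarray(A, dtype=float))
    return Vt[-1].reshape(3, 3)   # null-space vector, up to scale

def pose_from_homography(H, K):
    """Planar pose decomposition: H ~ K [r1 r2 t] for points on the z=0 plane."""
    M = np.linalg.inv(K) @ H
    if M[2, 2] < 0:               # pick the solution with the marker in front
        M = -M
    lam = 1.0 / np.linalg.norm(M[:, 0])
    r1, r2, t = lam * M[:, 0], lam * M[:, 1], lam * M[:, 2]
    R = np.column_stack([r1, r2, np.cross(r1, r2)])
    U, _, Vt = np.linalg.svd(R)   # re-orthonormalize the rotation
    return U @ Vt, t
```

Given the recovered rotation R and translation t, the camera's Euler angles and 3D position in the marker's physical frame follow directly.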
Pub Date: 2020-11-20, DOI: 10.1109/ICRAE50850.2020.9310865
Abhidipta Mallik, V. Kapila
Recent years have witnessed several educational innovations that provide effective and engaging classroom instruction by integrating immersive interactions based on augmented reality and virtual reality (AR/VR). This paper outlines the development of an ARCore-based application (app) that can impart interactive experiences for hands-on learning in engineering laboratories. The ARCore technology enables a smartphone to sense its environment and detect horizontal and vertical surfaces, allowing the smartphone to estimate positions anywhere in its workspace. In this mobile app, with touch-based interaction and AR feedback, the user can interact with a wheeled mobile robot and reinforce the concepts of kinematics for a differential-drive mobile robot. The user experience is evaluated and system performance is validated through a user study with participants. The assessment shows that the proposed AR interface for interacting with the experimental setup is intuitive, easy to use, exciting, and worth recommending.
Title: "Interactive Learning of Mobile Robots Kinematics Using ARCore"
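The app above reinforces differential-drive kinematics. A minimal sketch of the underlying model (the wheel radius and track width are hypothetical values, not tied to the paper's robot):

```python
import math

def diff_drive_step(x, y, theta, w_l, w_r, r=0.033, L=0.16, dt=0.01):
    """One Euler-integration step of differential-drive kinematics.
    w_l, w_r: wheel angular velocities (rad/s); r: wheel radius (m);
    L: distance between the wheels (m)."""
    v = r * (w_l + w_r) / 2.0   # forward speed of the chassis
    w = r * (w_r - w_l) / L     # yaw rate
    return (x + v * math.cos(theta) * dt,
            y + v * math.sin(theta) * dt,
            theta + w * dt)
```

Equal wheel speeds drive the robot straight; a speed difference makes it turn, which is exactly the behavior such an AR app can visualize on the detected floor plane.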
Pub Date: 2020-11-20, DOI: 10.1109/ICRAE50850.2020.9310888
Chinmay Patil, Sumit Tanpure, A. Lohiya, Sudesh Pawar, Pratik P. Mohite
Our chief objective is to prototype a vehicle that can collect garbage floating on the surface of a water body (lake, pond, river, sea, or ocean) as well as on shorelines and riverbeds, without major human assistance or more than one person being involved. The garbage collection process has been automated, i.e., the vehicle detects the garbage and moves accordingly so that the garbage can be picked up by the collection system. The recognition part is done using a deep learning model that detects a selective class of objects. After identifying the trash, the vehicle aligns itself with the object. For the mobility of the boat, twin screw propellers are used, as shown in Fig. 1, to facilitate smooth movement on land as well as in water. A separate compartment in the vehicle receives all the collected trash via a conveyor system. A remote control is provided as an alternative way of controlling the boat. Since the DL model is not 100% accurate, we have kept remote control as an option; it is required in the initial stage of movement and when shifting between different terrains.
Title: "Autonomous Amphibious Vehicle for Monitoring and Collecting Marine Debris"
Pub Date: 2020-11-20, DOI: 10.1109/ICRAE50850.2020.9310911
Anh Vo Ngoc Tram, M. Raweewan
This study aims to design a semi-automatic assembly line, which is relevant to human-robot task allocation problems. It combines two methods: Design for Assembly (DFA) and optimization. First, the DFA difficulty score of each task, including inspection, is evaluated for both human and robot execution. The scores are then fed into an optimization model: a mathematical model optimally assigns tasks to humans and robots with a feasible sequence. The proposed mathematical models are illustrated on a Lego-car assembly with two demand scenarios, low and high. Results show that while three single-objective models do not provide good solutions, a multi-objective linear problem (MOLP) that jointly minimizes total cost, cycle time, and difficulty scores provides a better solution. The weights of the objectives in the MOLP are determined by a modified two-person zero-sum game combined with a weighted-sum method.
Title: "Optimal Task Allocation in Human-Robotic Assembly Processes"
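A hedged sketch of the weighted-sum idea on a toy Lego-style example. The task names, metrics, and weights below are invented for illustration; the paper's MOLP additionally treats cycle time as a bottleneck rather than a simple sum and derives the weights from a modified two-person zero-sum game:

```python
from itertools import product

# Hypothetical per-task metrics: (cost, time, difficulty) for each agent.
TASKS = {
    "pick_base":   {"human": (2.0, 8.0, 1.0), "robot": (1.0, 5.0, 2.0)},
    "insert_axle": {"human": (2.0, 6.0, 2.0), "robot": (1.5, 9.0, 4.0)},
    "inspect":     {"human": (3.0, 4.0, 1.0), "robot": (2.5, 3.0, 5.0)},
}

def weighted_sum_allocation(weights=(0.4, 0.3, 0.3)):
    """Exhaustively minimize w1*cost + w2*time + w3*difficulty over all
    human/robot assignments (fine for a handful of tasks)."""
    names = list(TASKS)
    best = None
    for combo in product(("human", "robot"), repeat=len(names)):
        score = sum(w * m
                    for name, agent in zip(names, combo)
                    for w, m in zip(weights, TASKS[name][agent]))
        if best is None or score < best[0]:
            best = (score, dict(zip(names, combo)))
    return best
```

For realistic task counts the exhaustive loop would be replaced by an integer linear program, which is the route the paper takes.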
Pub Date: 2020-11-20, DOI: 10.1109/ICRAE50850.2020.9310810
Matheus P. Sanches, R. A. P. Faria, S. Cunha
The use of Unmanned Aerial Vehicles (UAVs) to perform complex tasks is very common today. The private transport sector will soon experience a significant change with the arrival of a new concept, the electric Vertical Take-Off and Landing (eVTOL) aircraft. These autonomous vehicles will, at first, be used alongside manned aircraft, which implies that eVTOL aircraft will have to use the current civil airspace, where the aviation authorities require full compliance with the regulations that apply to General Aviation. This paper proposes a complete prototype of a Collision Avoidance System (CAS) for VTOL UAVs operating in civil airspace under the Visual Flight Rules (VFR). The entire CAS was implemented and tested on micro quadcopters, providing experimental results that serve as a proof of concept for the whole system.
Title: "Visual Flight Rules-based Collision Avoidance System for VTOL UAV"
Pub Date: 2020-11-20, DOI: 10.1109/ICRAE50850.2020.9310883
Daniel Louback da Silva Lubanco, Gerhard Kaineder, Martin Scherhäufl, T. Schlechter, Daniel Salmen
This paper compares the reflectivity of signals transmitted by radar, lidar, and ultrasonic sensors when interacting with different materials, namely wood, glass, mirror, and polystyrene. Both the performance of individual sensors and sensor fusion approaches using data from multiple sensors have been analyzed. Moreover, the experiments were performed with the sensors attached to the TurtleBot2 robotic platform, since the further goal is to exploit each sensor's strengths on an autonomous robot.
Title: "A Comparison about the Reflectivity of Different Materials with Active Sensors"
Pub Date: 2020-11-20, DOI: 10.1109/ICRAE50850.2020.9310835
P. K. Prasad, W. Ertel
Understanding the environment is an essential capability for mobile service robots to achieve autonomy. While this has proven to be a very challenging task, researchers have come up with a variety of solutions to enable robots to interact with and understand the environment around them. This capability allows robots to make more intelligent decisions using acquired knowledge. In parallel, equipping robots with situation awareness is of equal importance, since the semantic context of various entities can change as the situation changes. This paper delivers a review of the state-of-the-art artificial intelligence approaches taken by researchers to enable robots to acquire and process knowledge of their environment and situation, leading to robotic awareness. The focal points include techniques for knowledge acquisition, decision management, reasoning, situation awareness, human-robot interaction, and planning. The goal is to compile the modern techniques used in this field, based on which future research directions could be defined.
Title: "Knowledge Acquisition and Reasoning Systems for Service Robots: A Short Review of the State of the Art"
Pub Date: 2020-11-20, DOI: 10.1109/ICRAE50850.2020.9310862
Daniel Louback da Silva Lubanco, Markus Pichler-Scheder, T. Schlechter, Martin Scherhäufl, C. Kastl
Frontier-based exploration can be used in a wide range of robotics applications; for example, exploration algorithms are commonly employed in Urban Search and Rescue (USAR). To decide the next place a robot should visit, frontier-based exploration often relies on cost or utility functions. In this paper we compare five different cost or utility functions known in the scientific community that are used in frontier-based exploration algorithms. The paper addresses two main goals: (1) to explain the utility and cost functions used in the exploration problem, along with steps for implementing them; and (2) to show the results of each algorithm in a simulated environment. Finally, we discuss the peculiarities and notable differences between the exploration methods and the consequences of those differences.
Title: "A Review of Utility and Cost Functions Used in Frontier-Based Exploration Algorithms"
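A minimal sketch of one common cost/utility form used to rank frontiers. The specific five functions compared in the paper are not reproduced here; the gain and cost definitions and the weights below are illustrative assumptions:

```python
import math

def select_frontier(robot, frontiers, alpha=1.0, beta=0.5):
    """Pick the frontier maximizing utility = beta*gain - alpha*cost,
    one common form of frontier ranking.
    robot: (x, y); frontiers: dicts with 'pos' (x, y) and 'size'."""
    def cost(f):
        # Straight-line travel cost; a real planner would use path length.
        return math.hypot(f["pos"][0] - robot[0], f["pos"][1] - robot[1])

    def gain(f):
        # Expected information gain, approximated by frontier size.
        return f["size"]

    return max(frontiers, key=lambda f: beta * gain(f) - alpha * cost(f))
```

Varying alpha and beta trades off travel distance against expected newly observed area, which is precisely the kind of behavioral difference such a comparison study examines.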