Simultaneous Calibration of Positions, Orientations, and Time Offsets Among Multiple Microphone Arrays
Pub Date: 2021-08-11 | DOI: 10.1109/ICAS49788.2021.9551166
Chishio Sugiyama, Katsutoshi Itoyama, Kenji Nishida, K. Nakadai
This paper examines the estimation of positions, orientations, and time offsets among multiple microphone arrays and the resulting sound source localization. Conventional methods have limitations, including requiring multiple steps for calibration, assuming synchronization between multiple microphone arrays, and needing a priori information, which can result in convergence to a local optimum and long convergence times. Accordingly, we propose a novel calibration method that simultaneously optimizes the positions and orientations of the microphone arrays and the time offsets between them. Numerical simulations achieved accurate and fast calibration of the microphone array parameters without falling into a local optimum, even when using asynchronous microphone arrays.
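As a rough illustration of the kind of joint optimization this abstract describes, the sketch below fits array positions, orientations, and clock offsets (together with event positions and emission times) to per-array arrival-time and DOA observations in a single nonlinear least-squares solve. The residual design, variable names, and the 2-D setting are illustrative assumptions, not the authors' formulation.

```python
# Hedged sketch: joint calibration of microphone-array poses and clock offsets
# from per-array arrival-time and DOA observations of K sound events.
# Illustrative only; not the paper's exact formulation.
import numpy as np
from scipy.optimize import least_squares

C = 343.0  # speed of sound [m/s]

def unpack(x, M, K):
    """Split the flat parameter vector into array poses, offsets, and events."""
    p   = x[:2*M].reshape(M, 2)            # array positions (x, y)
    th  = x[2*M:3*M]                        # array orientations [rad]
    tau = x[3*M:4*M]                        # array clock offsets [s]
    s   = x[4*M:4*M+2*K].reshape(K, 2)      # sound event positions
    te  = x[4*M+2*K:]                       # sound event emission times
    return p, th, tau, s, te

def residuals(x, toa, doa, M, K):
    """toa[i,k]: arrival time at array i; doa[i,k]: DOA angle in array i's local frame."""
    p, th, tau, s, te = unpack(x, M, K)
    res = []
    for i in range(M):
        for k in range(K):
            d = s[k] - p[i]
            r = np.linalg.norm(d)
            res.append(toa[i, k] - (te[k] + r / C + tau[i]))       # time residual
            bearing = np.arctan2(d[1], d[0]) - th[i]                # global -> local angle
            res.append(np.arctan2(np.sin(doa[i, k] - bearing),      # wrapped angle residual
                                  np.cos(doa[i, k] - bearing)))
    return np.asarray(res)

# toy usage with M arrays and K events (real measurements would replace these)
M, K = 4, 6
rng = np.random.default_rng(0)
toa = rng.uniform(0.0, 0.1, (M, K))
doa = rng.uniform(-np.pi, np.pi, (M, K))
x0 = np.zeros(4*M + 3*K)
sol = least_squares(residuals, x0, args=(toa, doa, M, K))
print(sol.cost)
```

In practice, one array's pose and offset would be fixed (or priors added) to remove the global gauge freedom, and the time and angle residuals would be weighted to balance their units.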
{"title":"Simultaneous Calibration of Positions, Orientations, and Time Offsets, Among Multiple Microphone Arrays","authors":"Chishio Sugiyama, Katsutoshi Itoyama, Kenji Nishida, K. Nakadai","doi":"10.1109/ICAS49788.2021.9551166","DOIUrl":"https://doi.org/10.1109/ICAS49788.2021.9551166","url":null,"abstract":"This paper examines estimation of positions, orientations, and time offsets among multiple microphone arrays and resultant sound 10-cation. Conventional methods have limitations including requiring multiple steps for calibration, assuming synchronization between multiple microphone arrays, and necessity of a priori information, which results in convergence to a local optimal solution and large convergence time. Accordingly, we propose a novel calibration method that simultaneously optimizes positions and orientations of microphone arrays and the time offsets between them. Numerical simulations achieved accurate and fast calibration of microphone parameters without falling into a local optimum solution even when using asynchronous microphone arrays.","PeriodicalId":287105,"journal":{"name":"2021 IEEE International Conference on Autonomous Systems (ICAS)","volume":"29 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-08-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114584759","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Gesture Learning For Self-Driving Cars
Pub Date: 2021-08-11 | DOI: 10.1109/ICAS49788.2021.9551186
Ethan Shaotran, Jonathan J. Cruz, V. Reddi
Human-computer interaction (HCI) is crucial for safety as autonomous vehicles (AVs) become commonplace. Yet, little effort has been put toward ensuring that AVs understand human communications on the road. In this paper, we present Gesture Learning for Advanced Driver Assistance Systems (GLADAS), a deep learning-based self-driving car hand gesture recognition system developed and evaluated using virtual simulation. We focus on gestures as they are a natural and common way for pedestrians to interact with drivers. We challenge the system to perform in typical, everyday driving interactions with humans. Our results provide a baseline performance of 94.56% accuracy and 85.91% F1 score, promising statistics that surpass human performance and motivate the need for further research into human-AV interaction.
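For readers less familiar with the reported figures, the snippet below shows how accuracy and an F1 score of this kind are typically computed for a multi-class gesture classifier; the gesture labels and the macro averaging are illustrative assumptions, not details taken from the paper.

```python
# Minimal sketch of the reported evaluation metrics for a gesture classifier.
from sklearn.metrics import accuracy_score, f1_score

y_true = ["left", "stop", "right", "stop", "left", "go"]   # assumed label set
y_pred = ["left", "stop", "right", "left", "left", "go"]

print("accuracy:", accuracy_score(y_true, y_pred))
print("F1 (macro):", f1_score(y_true, y_pred, average="macro"))
```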
{"title":"Gesture Learning For Self-Driving Cars","authors":"Ethan Shaotran, Jonathan J. Cruz, V. Reddi","doi":"10.1109/ICAS49788.2021.9551186","DOIUrl":"https://doi.org/10.1109/ICAS49788.2021.9551186","url":null,"abstract":"Human-computer interaction (HCI) is crucial for safety as autonomous vehicles (AVs) become commonplace. Yet, little effort has been put toward ensuring that AVs understand human communications on the road. In this paper, we present Gesture Learning for Advanced Driver Assistance Systems (GLADAS), a deep learning-based self-driving car hand gesture recognition system developed and evaluated using virtual simulation. We focus on gestures as they are a natural and common way for pedestrians to interact with drivers. We challenge the system to perform in typical, everyday driving interactions with humans. Our results provide a baseline performance of 94.56% accuracy and 85.91% F1 score, promising statistics that surpass human performance and motivate the need for further research into human-AV interaction.","PeriodicalId":287105,"journal":{"name":"2021 IEEE International Conference on Autonomous Systems (ICAS)","volume":"49 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-08-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126683532","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Observational Learning: Imitation Through an Adaptive Probabilistic Approach
Pub Date: 2021-08-11 | DOI: 10.1109/ICAS49788.2021.9551152
Sheida Nozari, L. Marcenaro, David Martín, C. Regazzoni
This paper proposes an adaptive method to enable imitation learning from expert demonstrations in a multi-agent context. The proposed system applies inverse reinforcement learning to a coupled Dynamic Bayesian Network to facilitate dynamic learning in an interactive system. The method studies the interaction at both discrete and continuous levels by identifying inter-relationships between the objects to facilitate prediction of the expert agent. We evaluate the learning procedure in the learner agent's scene based on a probabilistic reward function. Our goal is to estimate policies whose predicted trajectories match the observed ones by minimizing the Kullback-Leibler divergence. The reward policies provide a probabilistic dynamic structure to minimize abnormalities.
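A minimal sketch of the Kullback-Leibler objective mentioned above, assuming that trajectories are summarized as histograms over a discretized state space; the discretization and the use of scipy's rel_entr are illustrative choices, not the authors' formulation.

```python
# Hedged sketch: KL divergence between the observed (expert) trajectory
# distribution and the distribution predicted under a candidate policy.
import numpy as np
from scipy.special import rel_entr

def kl_divergence(p_observed, q_predicted, eps=1e-12):
    """D_KL(P || Q) for two histograms over the same discrete trajectory bins."""
    p = np.asarray(p_observed, float) + eps
    q = np.asarray(q_predicted, float) + eps
    p /= p.sum()
    q /= q.sum()
    return rel_entr(p, q).sum()

# toy usage: visit counts over four discrete states, expert vs. learner
expert  = np.array([10, 30, 40, 20])
learner = np.array([15, 25, 35, 25])
print(kl_divergence(expert, learner))
```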
{"title":"Observational Learning: Imitation Through an Adaptive Probabilistic Approach","authors":"Sheida Nozari, L. Marcenaro, David Martín, C. Regazzoni","doi":"10.1109/ICAS49788.2021.9551152","DOIUrl":"https://doi.org/10.1109/ICAS49788.2021.9551152","url":null,"abstract":"This paper proposes an adaptive method to enable imitation learning from expert demonstrations in a multi-agent context. The proposed system employs the inverse reinforcement learning method to a coupled Dynamic Bayesian Network to facilitate dynamic learning in an interactive system. This method studies the interaction at both discrete and continuous levels by identifying inter-relationships between the objects to facilitate the prediction of an expert agent. We evaluate the learning procedure in the scene of learner agent based on probabilistic reward function. Our goal is to estimate policies that predict matched trajectories with the observed one by minimizing the Kullback-Leiber divergence. The reward policies provide a probabilistic dynamic structure to minimise the abnormalities.","PeriodicalId":287105,"journal":{"name":"2021 IEEE International Conference on Autonomous Systems (ICAS)","volume":"48 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-08-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114553170","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Semantic Image Segmentation Guided By Scene Geometry
Pub Date: 2021-08-11 | DOI: 10.1109/ICAS49788.2021.9551117
Sotirios Papadopoulos, Ioannis Mademlis, I. Pitas
Semantic image segmentation is an important functionality in various applications, such as robotic vision for autonomous cars, drones, etc. Modern Convolutional Neural Networks (CNNs) process input RGB images and predict per-pixel semantic classes. Depth maps have been successfully utilized to increase accuracy over RGB-only input. They can be used as an additional input channel complementing the RGB image, or they may be estimated by an extra neural branch under a multitask training setting. In contrast to these approaches, in this paper we explore a novel regularizer that penalizes differences between semantic and self-supervised depth predictions on presumed object boundaries during CNN training. The proposed method does not resort to multitask training (which may require a more complex CNN backbone to avoid underfitting), does not rely on RGB-D or stereoscopic 3D training data and does not require known or estimated depth maps during inference. Quantitative evaluation on a public scene parsing video dataset for autonomous driving indicates enhanced semantic segmentation accuracy with zero inference runtime overhead.
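The following is a hedged sketch of what such a boundary-aware regularizer could look like during training, assuming PyTorch, a per-pixel semantic logit tensor, and a self-supervised depth prediction; the edge operator, weighting, and loss form are illustrative assumptions rather than the paper's exact regularizer.

```python
# Hedged sketch: penalize disagreement between edges of the semantic prediction
# and edges of a self-supervised depth prediction on presumed object boundaries.
import torch
import torch.nn.functional as F

def spatial_grad(x):
    """Finite-difference gradient magnitudes for a (B, 1, H, W) map."""
    dx = (x[..., :, 1:] - x[..., :, :-1]).abs()   # horizontal differences
    dy = (x[..., 1:, :] - x[..., :-1, :]).abs()   # vertical differences
    return dx, dy

def boundary_consistency_loss(seg_logits, depth):
    """seg_logits: (B, C, H, W) semantic logits; depth: (B, 1, H, W) depth prediction."""
    seg_conf = F.softmax(seg_logits, dim=1).max(dim=1, keepdim=True).values
    sdx, sdy = spatial_grad(seg_conf)
    ddx, ddy = spatial_grad(depth)
    # presumed object boundaries: locations of strong depth discontinuities
    wx, wy = torch.sigmoid(10 * ddx), torch.sigmoid(10 * ddy)
    return (wx * (sdx - ddx).abs()).mean() + (wy * (sdy - ddy).abs()).mean()

# usage inside a training step (shapes are illustrative)
seg_logits = torch.randn(2, 19, 64, 128, requires_grad=True)
depth = torch.rand(2, 1, 64, 128)
loss = boundary_consistency_loss(seg_logits, depth)
loss.backward()
```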
{"title":"Semantic Image Segmentation Guided By Scene Geometry","authors":"Sotirios Papadopoulos, Ioannis Mademlis, I. Pitas","doi":"10.1109/ICAS49788.2021.9551117","DOIUrl":"https://doi.org/10.1109/ICAS49788.2021.9551117","url":null,"abstract":"Semantic image segmentation is an important functionality in various applications, such as robotic vision for autonomous cars, drones, etc. Modern Convolutional Neural Networks (CNNs) process input RGB images and predict per-pixel semantic classes. Depth maps have been successfully utilized to increase accuracy over RGB-only input. They can be used as an additional input channel complementing the RGB image, or they may be estimated by an extra neural branch under a multitask training setting. Contrary to these approaches, in this paper we explore a novel regularizer that penalizes differences between semantic and self-supervised depth predictions on presumed object boundaries during CNN training. The proposed method does not resort to multitask training (which may require a more complex CNN backbone to avoid underfitting), does not rely on RGB-D or stereoscopic 3D training data and does not require known or estimated depth maps during inference. Quantitative evaluation on a public scene parsing video dataset for autonomous driving indicates enhanced semantic segmentation accuracy with zero inference runtime overhead.","PeriodicalId":287105,"journal":{"name":"2021 IEEE International Conference on Autonomous Systems (ICAS)","volume":"7 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-08-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127934093","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Building And Measuring Trust In Human-Machine Systems
Pub Date: 2021-08-11 | DOI: 10.1109/ICAS49788.2021.9551131
Lida Ghaemi Dizaji, Yaoping Hu
In human-machine systems (HMS), the trust placed by humans in machines is a complex concept that attracts increasing research effort. Herein, we reviewed recent studies on building and measuring trust in HMS. The review was based on a comprehensive model of trust, IMPACTS, which comprises seven features: intention, measurability, performance, adaptivity, communication, transparency, and security. The review found that, in the past five years, HMS have fulfilled the features of intention, measurability, communication, and transparency. Most of the HMS consider the feature of performance. However, the reviewed HMS rarely address the feature of adaptivity and neglect the feature of security owing to the use of stand-alone simulations. These findings indicate that future work considering the features of adaptivity and/or security is imperative to foster human trust in HMS.
{"title":"Building And Measuring Trust In Human-Machine Systems","authors":"Lida Ghaemi Dizaji, Yaoping Hu","doi":"10.1109/ICAS49788.2021.9551131","DOIUrl":"https://doi.org/10.1109/ICAS49788.2021.9551131","url":null,"abstract":"In human-machine systems (HMS), trust placed by humans on machines is a complex concept and attracts increasingly research efforts. Herein, we reviewed recent studies on building and measuring trust in HMS. The review was based on one comprehensive model of trust – IMPACTS, which has 7 features of intention, measurability, performance, adaptivity, communication, transparency, and security. The review found that, in the past 5 years, HMS fulfill the features of intention, measurability, communication, and transparency. Most of the HMS consider the feature of performance. However, all of the HMS address rarely the feature of adaptivity and neglect the feature of security due to using stand-alone simulations. These findings indicate that future work considering the features of adaptivity and/or security is imperative to foster human trust in HMS.","PeriodicalId":287105,"journal":{"name":"2021 IEEE International Conference on Autonomous Systems (ICAS)","volume":"151 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-08-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"117337235","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Towards Three-Dimensional Active Incoherent Millimeter-Wave Imaging
Pub Date: 2021-08-11 | DOI: 10.1109/ICAS49788.2021.9551121
Stavros Vakalis, J. Nanzer
Active incoherent millimeter-wave (AIM) imaging is a new technique that combines aspects of passive millimeter-wave imaging and noise radar to obtain high-speed imagery. Using an interferometric receiving array combined with a small set of uncorrelated noise transmitters, measurements of the Fourier transform domain of the scene can be rapidly obtained, and scene images can be generated quickly via a two-dimensional inverse Fourier transform. Previously, AIM imaging provided two-dimensional reconstructions of the scene. In this work, we explore the use of active incoherent millimeter-wave imaging for automotive sensing by investigating feasible array layouts for automobiles and a new technique to impart range estimation to obtain three-dimensional imaging information.
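A minimal sketch of the image-formation step described above: gridded samples of the scene's two-dimensional Fourier (spatial-frequency) domain are inverted with a 2-D inverse FFT. The gridding, sampling mask, and synthetic scene are illustrative assumptions.

```python
# Hedged sketch: scene reconstruction from gridded Fourier-domain samples
# via a two-dimensional inverse FFT.
import numpy as np

def reconstruct(visibility_grid):
    """visibility_grid: complex 2-D array of gridded Fourier-domain samples (DC centered)."""
    return np.abs(np.fft.ifft2(np.fft.ifftshift(visibility_grid)))

# toy usage: a synthetic scene, its Fourier-domain samples, and the recovered image
scene = np.zeros((64, 64)); scene[20:30, 35:45] = 1.0
fourier = np.fft.fftshift(np.fft.fft2(scene))   # stands in for the sampled Fourier domain
mask = np.random.rand(64, 64) < 0.5             # partial spatial-frequency coverage
image = reconstruct(fourier * mask)
print(image.max())
```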
{"title":"Towards Three-Dimensional Active Incoherent Millimeter-Wave Imaging","authors":"Stavros Vakalis, J. Nanzer","doi":"10.1109/ICAS49788.2021.9551121","DOIUrl":"https://doi.org/10.1109/ICAS49788.2021.9551121","url":null,"abstract":"Active incoherent millimeter-wave (AIM) imaging is a new technique that combines aspects of passive millimeter-wave imaging and noise radar to obtain high-speed imagery. Using an interferometric receiving array combined with small set of uncorrelated noise transmitters, measurements of the Fourier transform domain of the scene can be rapidly obtained, and scene images can be generated quickly via two-dimensional inverse Fourier transform. Previously, AIM imaging provided two-dimensional reconstructions of the scene. In this work we explore the use of active millimeter-wave imaging for automotive sensing by investigating array feasible layouts for automobiles, and a new technique to impart range estimation to obtain three-dimensional imaging information.","PeriodicalId":287105,"journal":{"name":"2021 IEEE International Conference on Autonomous Systems (ICAS)","volume":"2 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-08-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133703400","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Learning Robust Features for 3D Object Pose Estimation
Pub Date: 2021-08-11 | DOI: 10.1109/ICAS49788.2021.9551126
Christos Papaioannidis, I. Pitas
Object pose estimation remains an open and important task for autonomous systems, allowing them to perceive and interact with the surrounding environment. To this end, this paper proposes a 3D object pose estimation method that is suitable for execution on embedded systems. Specifically, a novel multi-task objective function is proposed, in order to train a Convolutional Neural Network (CNN) to extract pose-related features from RGB images, which are subsequently utilized in a Nearest-Neighbor (NN) search-based post-processing step to obtain the final 3D object poses. By utilizing a symmetry-aware term and unit quaternions in the proposed objective function, our method yielded more robust and discriminative features, thus increasing 3D object pose estimation accuracy compared to the state of the art. In addition, the employed feature extraction network utilizes a lightweight CNN architecture, allowing execution on hardware with limited computational capabilities. Finally, we demonstrate that the proposed method is also able to successfully generalize to previously unseen objects, without the need for extra training.
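As a rough sketch of the Nearest-Neighbor post-processing step, the snippet below matches a CNN-extracted query feature against a codebook of (feature, unit-quaternion) pairs and returns the pose of the closest entry; the codebook construction, feature dimensionality, and Euclidean metric are assumptions for illustration.

```python
# Hedged sketch: NN lookup of a 3D pose (unit quaternion) from a feature codebook.
import numpy as np
from sklearn.neighbors import NearestNeighbors

def build_index(codebook_features):
    """Fit a 1-NN index over codebook feature vectors."""
    return NearestNeighbors(n_neighbors=1).fit(codebook_features)

def estimate_pose(index, codebook_quats, query_feature):
    """Return the unit quaternion stored for the closest codebook entry."""
    _, idx = index.kneighbors(query_feature[None, :])
    q = codebook_quats[idx[0, 0]]
    return q / np.linalg.norm(q)

# toy usage with a random codebook of 1000 descriptors and poses
feats = np.random.randn(1000, 128)
quats = np.random.randn(1000, 4)
index = build_index(feats)
print(estimate_pose(index, quats, np.random.randn(128)))
```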
{"title":"Learning Robust Features for 3D Object Pose Estimation","authors":"Christos Papaioannidis, I. Pitas","doi":"10.1109/ICAS49788.2021.9551126","DOIUrl":"https://doi.org/10.1109/ICAS49788.2021.9551126","url":null,"abstract":"Object pose estimation remains an open and important task for autonomous systems, allowing them to perceive and interact with the surrounding environment. To this end, this paper proposes a 3D object pose estimation method that is suitable for execution on embedded systems. Specifically, a novel multi-task objective function is proposed, in order to train a Convolutional Neural Network (CNN) to extract pose-related features from RGB images, which are subsequently utilized in a Nearest-Neighbor (NN) search-based post-processing step to obtain the final 3D object poses. By utilizing a symmetry-aware term and unit quaternions in the proposed objective function, our method yielded more robust and discriminative features, thus, increasing 3D object pose estimation accuracy when compared to state-of-the-art. In addition, the employed feature extraction network utilizes a lightweight CNN architecture, allowing execution on hardware with limited computational capabilities. Finally, we demonstrate that the proposed method is also able to successfully generalize to previously unseen objects, without the need for extra training.","PeriodicalId":287105,"journal":{"name":"2021 IEEE International Conference on Autonomous Systems (ICAS)","volume":"25 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-08-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114826011","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A Visual Control Scheme for AUV Underwater Pipeline Tracking
Pub Date: 2021-08-11 | DOI: 10.1109/ICAS49788.2021.9551173
W. Akram, A. Casavola
Inspection of submarine cables and pipelines is nowadays increasingly carried out by Autonomous Underwater Vehicles (AUVs), both because of their low operating costs, much lower than those of traditional ship/ROV-based (Remotely Operated Vehicle) industrial practice, and because of improvements in their effectiveness due to technological and methodological progress in the field. In this paper, we discuss the design of a visual control scheme aimed at solving a pipeline tracking control problem. The presented scheme autonomously generates a reference path for an underwater pipeline deployed on the seabed from the images taken by a camera mounted on the AUV, allowing the vehicle to move parallel to the longitudinal axis of the pipeline so as to inspect its status. The robustness of the scheme is also shown by adding external disturbances to the closed-loop control system. We present a comparative simulation study under the Robot Operating System (ROS) to identify suitable solutions for the underwater pipeline tracking problem.
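The abstract does not spell out the image-processing pipeline, but one plausible way to turn camera frames into a trackable reference, sketched below under that assumption, is to detect the pipeline's dominant image line with a Hough transform and convert it into lateral and heading errors for the closed-loop controller.

```python
# Hedged sketch: extract the pipeline's dominant image line and derive
# lateral-offset and heading errors. Purely illustrative, not the paper's method.
import numpy as np
import cv2

def pipeline_errors(gray):
    """Return (lateral_error_px, heading_error_rad) from a grayscale camera frame."""
    edges = cv2.Canny(gray, 50, 150)
    lines = cv2.HoughLinesP(edges, 1, np.pi / 180, threshold=60,
                            minLineLength=gray.shape[0] // 3, maxLineGap=20)
    if lines is None:
        return None
    # keep the longest detected segment as the pipeline axis
    x1, y1, x2, y2 = max(lines[:, 0, :], key=lambda l: np.hypot(l[2] - l[0], l[3] - l[1]))
    heading_err = np.arctan2(x2 - x1, y2 - y1)            # deviation from image vertical
    lateral_err = (x1 + x2) / 2.0 - gray.shape[1] / 2.0   # offset from image centre
    return lateral_err, heading_err

# usage: these errors would feed, e.g., a yaw/sway controller on the AUV
frame = np.zeros((240, 320), np.uint8)
cv2.line(frame, (150, 0), (170, 239), 255, 3)             # synthetic pipeline
print(pipeline_errors(frame))
```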
{"title":"A Visual Control Scheme for AUV Underwater Pipeline Tracking","authors":"W. Akram, A. Casavola","doi":"10.1109/ICAS49788.2021.9551173","DOIUrl":"https://doi.org/10.1109/ICAS49788.2021.9551173","url":null,"abstract":"Inspection of submarine cables and pipelines is nowadays more and more carried out by Autonomous Underwater Vehicles (AUVs) because of their low operative costs, much less than those pertaining to the traditional SHIP/ROV-based (Remotely Operated Vehicles) industrial practice, and for the improvements in their effectiveness due to technological and methodological progress in the field. In this paper, we discuss the design of a visual control scheme aimed at solving a pipeline tracking control problem. The presented scheme consists of autonomously generating a reference path of an underwater pipeline deployed on the seabed from the images taken by a camera mounted on the AUV in order to allow the vehicle to move parallel to the longitudinal axis of the pipeline so as to inspect its status. The robustness of the scheme is also shown by adding external disturbances to the closed-loop control systems. We present a comparative simulation study under Robot Operating System (ROS) to find out suitable solutions for the underwater pipeline tracking problem.","PeriodicalId":287105,"journal":{"name":"2021 IEEE International Conference on Autonomous Systems (ICAS)","volume":"36 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-08-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132889280","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Perspectives on the Emerging Field of Autonomous Systems and its Theoretical Foundations
Pub Date: 2021-08-11 | DOI: 10.1109/ICAS49788.2021.9551191
Yingxu Wang, K. Plataniotis, Arash Mohammadi, L. Marcenaro, A. Asif, Ming Hou, Henry Leung, Marina L. Gavrilova
Autonomous systems (AS) are advanced intelligent systems and general AI technologies triggered by transdisciplinary developments in intelligence science, system science, brain science, cognitive science, robotics, computational intelligence, and intelligent mathematics. AS are driven by increasing demands in the modern industries of cognitive computers, deep machine learning, robotics, brain-inspired systems, self-driving cars, the Internet of Things, and intelligent appliances. This paper presents a perspective on the framework of autonomous systems and their theoretical foundations. A wide range of application paradigms of autonomous systems is explored.
{"title":"Perspectives on the Emerging Field of Autonomous Systems and its Theoretical Foundations","authors":"Yingxu Wang, K. Plataniotis, Arash Mohammadi, L. Marcenaro, A. Asif, Ming Hou, Henry Leung, Marina L. Gavrilova","doi":"10.1109/ICAS49788.2021.9551191","DOIUrl":"https://doi.org/10.1109/ICAS49788.2021.9551191","url":null,"abstract":"Autonomous systems are advanced intelligent systems and general AI technologies triggered by the transdisciplinary development in intelligence science, system science, brain science, cognitive science, robotics, computational intelligence, and intelligent mathematics. AS are driven by the increasing demands in the modern industries of cognitive computers, deep machine learning, robotics, brain-inspired systems, self-driving cars, internet of things, and intelligent appliances. This paper presents a perspective on the framework of autonomous systems and their theoretical foundations. A wide range of application paradigms of autonomous systems are explored.","PeriodicalId":287105,"journal":{"name":"2021 IEEE International Conference on Autonomous Systems (ICAS)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-08-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130859598","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
First Steps Toward The Development Of Virtual Platform For Validation Of Autonomous Wheel Loader At Pulp-And-Paper Mill: Modelling, Control And Real-Time Simulation
Pub Date: 2021-08-11 | DOI: 10.1109/ICAS49788.2021.9551112
Michael A. Kerr, D. Nasrallah, Tsz-Ho Kwok
The forestry industry worldwide is seeing the need to modernize its machines toward autonomy. In this paper, we focus on a wheel loader that should operate autonomously in the yard of a pulp-and-paper mill, scooping wood chips from a pile and dropping them into a hopper linked to a conveyor that carries them inside the mill. The modelling of the wheel loader is elaborated first, taking into account that it is composed of two subsystems: (i) the vehicle and (ii) the arm carrying the bucket. Note that the former belongs to the category of articulated vehicles, which steer using a different mechanism than the conventional Ackermann steering of car-like vehicles, while the latter is a 2DOF serial manipulator. Navigation is then considered. Finally, simulation results of the kinematic model are first shown in Matlab/Simulink, and then dynamics and 3D animation are added using ROS2/Gazebo. This work is a first step toward the development of a digital twin of the wheel loader, which will later be used as the virtual platform for the validation of the autonomous wheel loader.
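For readers unfamiliar with articulated steering, the sketch below integrates a commonly used kinematic model for articulated-steer vehicles (front-body pose plus articulation angle); it is a generic textbook-style model under assumed hinge geometry, not necessarily the exact model elaborated by the authors.

```python
# Hedged sketch: Euler integration of a common articulated-steer kinematic model.
# State: front-axle pose (x, y, theta) and articulation angle gamma.
# l1: front axle to hinge distance, l2: hinge to rear axle distance (placeholders).
import numpy as np

def step(state, v, gamma_rate, l1=1.5, l2=1.5, dt=0.01):
    """Advance the vehicle state by one time step of length dt."""
    x, y, theta, gamma = state
    x += v * np.cos(theta) * dt
    y += v * np.sin(theta) * dt
    # heading rate obtained from the no-lateral-slip constraints at both axles
    theta += (v * np.sin(gamma) + l2 * gamma_rate) / (l1 * np.cos(gamma) + l2) * dt
    gamma += gamma_rate * dt
    return (x, y, theta, gamma)

# toy usage: drive forward while slowly articulating toward the chip pile
state = (0.0, 0.0, 0.0, 0.0)
for _ in range(500):
    state = step(state, v=1.0, gamma_rate=0.05)
print(state)
```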
{"title":"First Steps Toward The Development Of Virtual Platform For Validation Of Autonomous Wheel Loader At Pulp-And-Paper Mill: Modelling, Control And Real-Time Simulation","authors":"Michael A. Kerr, D. Nasrallah, Tsz-Ho Kwok","doi":"10.1109/ICAS49788.2021.9551112","DOIUrl":"https://doi.org/10.1109/ICAS49788.2021.9551112","url":null,"abstract":"The forestry industry all over the world is seeing the need for modernization of its machines toward autonomy. In this paper, we focus on a wheel loader that should operate autonomously in the yard of a pulp-and-paper mill, scooping wood chips from a pile of wood and dropping them into a hopper, which is linked to a conveyor that carries them inside the mill. The modelling of the wheel loader is elaborated first, while taking into account that it is composed of two systems: (i) the vehicle and (ii) the arm carrying the bucket. Notice that the former pertains to the category of articulated vehicles that steer using a different mechanism than the conventional Ackermann steering used in car-like vehicles. As for the latter, it is a 2DOF serial manipulator. The navigation is considered then. Finally, simulation results of the kinematics model are shown in Matlab/Simulink first, then dynamics and 3D animation are added using ROS2/Gazebo. Notice that this work is a first step toward the development of the digital twin of the wheel loader. Later, it will be used as the virtual platform for the validation of the autonomous wheel loader.","PeriodicalId":287105,"journal":{"name":"2021 IEEE International Conference on Autonomous Systems (ICAS)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-08-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131350922","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}