Selection of Training Options for Deep Learning Neural Network Using Genetic Algorithm
P. Szymak
2019 24th International Conference on Methods and Models in Automation and Robotics (MMAR)
Pub Date: 2019-08-01 | DOI: 10.1109/MMAR.2019.8864729
Recently, Autonomous Underwater Vehicles (AUVs) have seen growing use and, consequently, increasing levels of autonomy. These vehicles are powered and controlled from sources located on board. One of the most frequently used AUV sensors is a video camera, which, in combination with image-processing software, can increase the vehicle's level of autonomy. One of the most popular camera-based applications is image recognition, e.g. for obstacle detection, and one of the newest methods for this task is the Deep Learning Neural Network (DLNN). The goal of the paper is to examine a genetic algorithm for selecting the training options of a DLNN used for underwater image recognition. In the research, the pretrained AlexNet DLNN and the Stochastic Gradient Descent with Momentum (SGDM) training method were used. It is planned to implement the examined DLNN on board Biomimetic Underwater Vehicles (BUVs).
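The hyperparameter-selection idea in the abstract can be sketched with a toy genetic algorithm. Everything below is illustrative: the fitness function is a synthetic surrogate standing in for validation accuracy after an SGDM training run (the paper trains AlexNet instead), and the search ranges and GA settings are assumptions, not values from the paper.

```python
import random

# Surrogate for validation accuracy after an SGDM training run (assumption:
# peak at lr = 0.01, momentum = 0.9); the paper would run a full AlexNet
# training here instead.
def fitness(lr, momentum):
    return 1.0 / (1.0 + ((lr - 0.01) * 100.0) ** 2 + ((momentum - 0.9) * 10.0) ** 2)

def evolve(pop_size=20, generations=30, seed=0):
    rng = random.Random(seed)
    pop = [(rng.uniform(1e-4, 0.1), rng.uniform(0.0, 0.99)) for _ in range(pop_size)]
    for _ in range(generations):
        ranked = sorted(pop, key=lambda p: fitness(*p), reverse=True)
        elite = ranked[: pop_size // 2]          # keep the better half (elitism)
        children = []
        while len(elite) + len(children) < pop_size:
            a, b = rng.sample(elite, 2)          # crossover of two elite parents
            lr = 0.5 * (a[0] + b[0]) * rng.uniform(0.9, 1.1)       # plus mutation
            mom = 0.5 * (a[1] + b[1]) + rng.uniform(-0.02, 0.02)
            children.append((min(max(lr, 1e-4), 0.1), min(max(mom, 0.0), 0.99)))
        pop = elite + children
    return max(pop, key=lambda p: fitness(*p))

best_lr, best_momentum = evolve()
```

Because the best individual is always retained, the returned pair is never worse than the best member of the initial random population.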
Poles and zeros of standard and fractional positive stable linear systems
T. Kaczorek, L. Sajewski
Pub Date: 2019-08-01 | DOI: 10.1109/MMAR.2019.8864707
Standard and fractional positive stable continuous-time linear systems with transfer matrices having only positive coefficients are analyzed. It is shown that if such a positive system is asymptotically stable, then its zeros are located in the open left half of the complex plane. Some invariant properties of positive standard and fractional linear systems are discussed.
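The zero-location statement can be illustrated in the simplest scalar case: a polynomial of degree at most two with only positive coefficients has all its roots in the open left half-plane, so such a numerator contributes only left-half-plane zeros. The snippet below checks this elementary special case numerically; it is not the paper's general matrix result.

```python
import cmath

def quadratic_roots(a, b, c):
    # Roots of a*s^2 + b*s + c = 0 via the complex-safe quadratic formula.
    d = cmath.sqrt(b * b - 4.0 * a * c)
    return (-b + d) / (2.0 * a), (-b - d) / (2.0 * a)
```

For positive a, b, c the real roots are negative and any complex pair has real part -b/(2a) < 0, so both cases land in the open left half-plane.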
Variable-, Fractional-Order Linear MIMO System Matrix Description
P. Ostalczyk
Pub Date: 2019-08-01 | DOI: 10.1109/MMAR.2019.8864612
In the paper we propose a novel method of describing variable-, fractional-order (VFO) multi-input multi-output (MIMO) discrete-time linear systems. Although the description is based on block matrices, each in upper triangular form, it shares many features with the matrix transfer functions and matrix-fraction descriptions of multivariable systems. Selected properties of the proposed matrices are given, and the investigations are supported by an example.
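For context, variable-, fractional-order discrete operators are commonly built from Grünwald-Letnikov coefficients, with the order allowed to change from step to step. The sketch below is a generic illustration of that building block, not the block-matrix description proposed in the paper; the recursion c_j = c_{j-1}(1 - (alpha + 1)/j) is the standard one for (-1)^j * binom(alpha, j).

```python
def gl_coeffs(alpha, n):
    # c_j = (-1)^j * binom(alpha, j), via c_j = c_{j-1} * (1 - (alpha + 1) / j).
    c = [1.0]
    for j in range(1, n + 1):
        c.append(c[-1] * (1.0 - (alpha + 1.0) / j))
    return c

def vfo_difference(x, order):
    # Grünwald-Letnikov difference with step-dependent order(k); for
    # order(k) == 1 this reduces to the ordinary backward first difference.
    y = []
    for k in range(len(x)):
        c = gl_coeffs(order(k), k)
        y.append(sum(c[j] * x[k - j] for j in range(k + 1)))
    return y
```

With order fixed at 1 the coefficients collapse to [1, -1, 0, ...], recovering x[k] - x[k-1]; with order 0 the operator is the identity.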
Model-Based Controller Using Quasi-Velocities for Some Vehicles
P. Herman, W. Adamski
Pub Date: 2019-08-01 | DOI: 10.1109/MMAR.2019.8864654
This paper deals with the problem of trajectory tracking control for certain vehicles (underwater vehicles and indoor airships). The approach uses a velocity transformation resulting from a decomposition of the inertia matrix. Next, a model-based, non-adaptive nonlinear tracking controller in terms of the Generalized Velocity Components (GVC) is proposed. An important property of the algorithm is that the control gains are closely related to the dynamics of the vehicle (especially the dynamical couplings). The general algorithm is given for a 6-DOF vehicle and tested in simulation. The results obtained for a full airship model show that the control scheme guarantees satisfactory performance.
Relative degree one and two sliding variables for multi-input discrete-time systems
P. Latosiński, A. Bartoszewicz
Pub Date: 2019-08-01 | DOI: 10.1109/MMAR.2019.8864724
In this paper we consider sliding mode control of multi-input discrete-time plants. For such plants, we propose the use of relative degree two sliding variables, which are known to improve the robustness of single-input plants compared to their relative degree one equivalents. We investigate the design procedure for relative degree two sliding variables in multi-input systems. Then, we present two sliding mode control strategies obtained with the reaching law approach, using relative degree one and two sliding variables, respectively. We demonstrate that both reaching laws ensure desirable properties of the sliding motion, and that the method using relative degree two sliding variables provides better robustness than the one with relative degree one variables. We further show that the proposed reaching-law-based strategies enable independent tuning of individual inputs.
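A minimal illustration of the reaching law approach mentioned in the abstract, for a scalar relative degree one sliding variable: Gao-type dynamics drive s toward a quasi-sliding band around zero. The gains q and eps below are arbitrary illustrative values; the paper's multi-input, relative degree two laws are considerably more involved.

```python
def sgn(x):
    return (x > 0) - (x < 0)

def reaching_law_trajectory(s0, q=0.2, eps=0.05, steps=60):
    # Gao-type discrete reaching law: s(k+1) = (1 - q) * s(k) - eps * sgn(s(k)).
    # The proportional term shrinks |s|; the switching term keeps the state
    # inside a quasi-sliding band of width on the order of eps.
    s = [s0]
    for _ in range(steps):
        s.append((1.0 - q) * s[-1] - eps * sgn(s[-1]))
    return s
```

Starting from s = 5, the trajectory decays geometrically and then chatters inside a narrow band around zero instead of settling exactly on s = 0, which is the expected discrete-time behaviour.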
Achievable Stereo Vision Depth Accuracy with Changing Camera Baseline
J. Sasiadek, M. Walker
Pub Date: 2019-08-01 | DOI: 10.1109/MMAR.2019.8864723
This paper examines how the achievable depth accuracy of a stereo vision system changes with the baseline between the two camera sensors. This is critical for Unmanned Aerial Vehicle (UAV) navigation, UAV aerial refueling, and space debris clearance operations. The theory behind stereo depth calculation is explained, and synthetic pixel data are generated to determine a 95% confidence interval on depth under two camera baseline conditions. A Gaussian pixel error is added to simulate Harris corner detection error. A disparity on the order of 10 pixels or less produces more than 1 cm of difference between expected and actual depth for the stereo camera baselines examined; for a 1-pixel disparity the difference is on the order of 50%. Future research is discussed.
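The depth-from-disparity relation behind the abstract, Z = fB/d, makes the baseline effect easy to reproduce: at a fixed depth, a wider baseline yields a larger disparity, so the same pixel noise produces a narrower depth interval. The focal length, baselines, and pixel sigma below are assumed illustrative values, not the paper's test conditions.

```python
def depth_from_disparity(f_px, baseline_m, disparity_px):
    # Pinhole stereo model: Z = f * B / d.
    return f_px * baseline_m / disparity_px

def depth_interval(f_px, baseline_m, disparity_px, sigma_px=0.5):
    # Approximate 95% interval on depth induced by Gaussian disparity error,
    # obtained by pushing the +-1.96-sigma disparity bounds through Z = f*B/d.
    lo = depth_from_disparity(f_px, baseline_m, disparity_px + 1.96 * sigma_px)
    hi = depth_from_disparity(f_px, baseline_m, disparity_px - 1.96 * sigma_px)
    return lo, hi
```

For a target 10 m away with f = 1000 px, a 0.1 m baseline gives d = 10 px while a 0.5 m baseline gives d = 50 px; the resulting 95% interval is several times narrower in the wide-baseline case.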
Features Matching based Merging of 3D Maps in Multi-Robot Systems
M. Drwiega
Pub Date: 2019-08-01 | DOI: 10.1109/MMAR.2019.8864711
The paper focuses on feature-matching-based merging of 3D maps in a multi-robot system. The presented approach works globally, meaning that an initial transformation is not necessary for proper integration of the maps. The only assumption is that the maps share a common part, which is used during feature detection, description, and matching to compute a transformation between them. The initial solution found this way is then corrected by a variant of an ICP-based method. The maps are stored in an octree-based representation (octomaps), but a point cloud representation is also used during transformation estimation. The method was verified in various experiments: in simulation, with Turtlebot robots, and on publicly available datasets. The solution can be applied to many robotic platforms, such as underwater robots, aerial robots, or robots equipped with manipulators; however, so far it has mostly been tested on groups of wheeled robots.
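One ingredient of such map merging — estimating a rigid transform once correspondences are available — has a closed-form 2-D solution (the Kabsch/Umeyama construction without scale). The sketch below illustrates only that step on point pairs; the paper's pipeline works on 3-D octomaps with feature matching and ICP refinement.

```python
import math

def align_2d(src, dst):
    # Closed-form least-squares rigid alignment (rotation + translation)
    # given matched 2-D point pairs src[i] -> dst[i].
    n = len(src)
    csx = sum(p[0] for p in src) / n; csy = sum(p[1] for p in src) / n
    cdx = sum(q[0] for q in dst) / n; cdy = sum(q[1] for q in dst) / n
    a = b = 0.0
    for (px, py), (qx, qy) in zip(src, dst):
        px -= csx; py -= csy; qx -= cdx; qy -= cdy
        a += px * qx + py * qy          # accumulated dot products
        b += px * qy - py * qx          # accumulated cross products
    theta = math.atan2(b, a)            # optimal rotation angle
    c, s = math.cos(theta), math.sin(theta)
    tx = cdx - (c * csx - s * csy)      # translation = dst centroid - R * src centroid
    ty = cdy - (s * csx + c * csy)
    return theta, tx, ty
```

Given noise-free correspondences the known transform is recovered to machine precision; with noisy matches the same formula gives the least-squares fit, which ICP then iterates with re-matched correspondences.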
Design of a Takagi-Sugeno State and Disturbance Observer for a Torque-Controlled Hydrostatic Transmission
Dang Ngoc Danh, H. Aschemann
Pub Date: 2019-08-01 | DOI: 10.1109/MMAR.2019.8864686
In this paper, a Takagi-Sugeno (TS) state and disturbance observer is designed. The corresponding estimates are employed in a decentralized control approach for the normalized tilt angle of the hydraulic motor as well as the motor torque of a hydrostatic transmission. For the observer design, a nonlinear state-space model is written in quasi-linear form and extended by two integrator disturbance models. The observer gain matrix is derived by exact interpolation of optimal designs at the vertices of a polytopic description, using the corresponding membership functions. Asymptotic stability of the observer error dynamics is guaranteed by solving a set of linear matrix inequalities (LMIs), resulting in a joint Lyapunov function. For accurate trajectory tracking, the feedback control is extended by feedforward control, and the disturbances estimated by the TS observer are used for disturbance rejection. The performance of the observer-based control structure is shown in simulations based on a validated model of a dedicated test rig available at the Chair of Mechatronics, University of Rostock.
Robotic manipulator path-planning: Cost-function approximation with fuzzy inference system
D. Szabó, E. Szádeczky-Kardoss
Pub Date: 2019-08-01 | DOI: 10.1109/MMAR.2019.8864639
This paper presents an offline path-planning method for robotic manipulators in a static environment. The framework is based on the Transition-based Rapidly-exploring Random Tree (T-RRT) algorithm, which requires a cost for each configuration. In this work, the cost function is based on the distance between the current position and configurations that cause collisions. The function is evaluated with fuzzy function approximation, which leads to an efficient way to determine the cost over the whole configuration space. The method is general; the only restriction is that the segments of the robot and the obstacles are modelled as convex polyhedra. The approach is validated through simulations in the MATLAB Simulink environment with a Mitsubishi RV-2F-Q manipulator.
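The transition test at the core of T-RRT can be sketched in a few lines: downhill moves in cost are always accepted, while uphill moves are accepted with a Boltzmann probability controlled by a temperature parameter. The constant K and the fixed temperature below are simplified assumptions; full implementations typically also adapt the temperature after consecutive rejections.

```python
import math
import random

def transition_test(c_near, c_new, temp, k_const=1.0, rng=None):
    # T-RRT transition test: downhill cost moves are always accepted;
    # uphill moves pass with Boltzmann probability exp(-dc / (K * T)).
    if c_new <= c_near:
        return True
    rng = rng or random.Random(0)
    dc = c_new - c_near
    return rng.random() < math.exp(-dc / (k_const * temp))
```

A low temperature makes the planner strongly cost-averse (large uphill steps are essentially never accepted), while a high temperature lets it explore high-cost regions more freely.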
Performance of LiDAR object detection deep learning architectures based on artificially generated point cloud data from CARLA simulator
Daniel Dworak, Filip Ciepiela, Jakub Derbisz, I. Izzat, M. Komorkiewicz, M. Wójcik
Pub Date: 2019-08-01 | DOI: 10.1109/MMAR.2019.8864642
Training deep neural networks for LiDAR-based object detection for autonomous cars requires a huge amount of labeled data. Both data collection and labeling require a lot of effort, money, and time. Therefore, the use of simulation software as a virtual data generation environment is gaining wide interest from both researchers and engineers. The big question remains how well artificially generated data resemble the data gathered by real sensors, and how the differences affect final algorithm performance. This article attempts to answer that question quantitatively. Selected state-of-the-art algorithms for LiDAR point cloud object detection were trained on both real and artificially generated data sets, and their performance on different test sets was evaluated. The main focus was to determine how well artificially trained networks perform on real data, and whether combined training sets achieve better results overall.