DQN as an alternative to Market-based approaches for Multi-Robot processing Task Allocation (MRpTA). Paul Gautier, J. Laurent, J. Diguet. International Journal of Robotic Computing, 2021-04-21. doi:10.35708/rc1870-126266

Multi-robot task allocation (MRTA) problems require that robots make complex choices based on their understanding of a dynamic and uncertain environment. As a distributed computing system, the Multi-Robot System (MRS) must handle and distribute processing tasks (MRpTA). Each robot must contribute to the overall efficiency of the system based solely on limited knowledge of its environment. Market-based methods are a natural candidate for distributing processing tasks over an MRS, but recent and numerous developments in reinforcement learning, and especially Deep Q-Networks (DQN), provide new opportunities to solve the problem. In this paper we propose a new DQN-based method so that robots can learn directly from experience, and compare it with Market-based approaches as well as with centralized and purely local solutions. Our study shows the relevance of learning-based methods and also highlights research challenges in solving the processing load-balancing problem in an MRS.
View Planning for Robotic Inspection of Tolerances Through Visual Tracking of Manual Surface Finishing Operations. E. B. Njaastad. International Journal of Robotic Computing, 2020-08-01. doi:10.35708/rc1869-126261

This article presents an approach for determining suitable camera view poses for inspection of surface tolerances based on visual tracking of the tool movements performed by a skilled worker. Automated surface inspection of a workpiece adjusted by manual operations depends on manual programming of the inspecting robot, or a time-consuming exhaustive search over the entire surface. The proposed approach is based on the assumption that the tool movements of the skilled worker coincide with the most relevant regions of the underlying surface of the workpiece, namely the parts where a manual process has been performed. The affected region is detected with a visual tracking system, which measures the motion of the tool using a low-cost RGBD-camera, a particle filter, and a CAD model of the tool. The main contribution is a scheme for selecting relevant camera view poses for inspecting the affected region using a robot equipped with a high-accuracy RGBD-camera. A principal component analysis of the tracked tool paths allows for evaluating the view poses by Hotelling's T-squared distribution test in order to sort and select suitable camera view poses. The approach is implemented and tested for the case where a large ship propeller blade cast in NiAl bronze is to be inspected by a robot after manual adjustments of its surface.
Bootstrapping MDE Development from ROS Manual Code. N. Garcia. International Journal of Robotic Computing, 2020-04-01. doi:10.35708/rc1869-126256

Ten years after its first release, the Robot Operating System (ROS) is arguably the most popular software framework used to program robots. It achieved such status despite its shortcomings compared to alternatives similarly centered on manual programming and, perhaps surprisingly, to model-driven engineering (MDE) approaches. Based on our experience, we identified possible ways to leverage the accessibility of ROS and its large software ecosystem, while providing quality assurance measures through selected MDE techniques. After describing our vision on how to combine MDE and manually written code, we present the first technical contribution in this pursuit: a family of three metamodels to respectively model ROS nodes, communication interfaces, and systems. Such metamodels can be used, through the accompanying Eclipse-based tooling made publicly available, to model ROS systems of arbitrary complexity and generate with correctness guarantees the software artifacts for their composition and deployment. Furthermore, they account for specifications on these aspects by the Object Management Group (OMG), in order to be amenable to hybrid systems coupling ROS and other frameworks. We also report on our experience with a large and complex corpus of ROS software, including the shortcomings of standard ROS tools and of previous efforts on ROS modeling.
A Novel Dual Quaternion Based Cost Efficient Recursive Newton-Euler Inverse Dynamics Algorithm. Cristiana Miranda de Farias. International Journal of Robotic Computing, 2019-12-01. doi:10.35708/rc1868-126255

In this paper, the well-known recursive Newton-Euler inverse dynamics algorithm for serial manipulators is reformulated in the context of the algebra of dual quaternions. Here we structure the forward kinematic description with screws and line displacements rather than the well-established Denavit-Hartenberg parameters, thus achieving better efficiency, compactness, and simpler dynamical models. We also present the closed-form solution for the dqRNEA, and to do so we formalize some of the algebra for dual quaternion vectors and dual quaternion matrices. With a closed formulation of the dqRNEA we also create a dual quaternion based formulation for computed torque control, a feedback linearization method for controlling a serial manipulator's torques in the joint space. Finally, a cost analysis of the main dual quaternion operations and of the Newton-Euler inverse dynamics algorithm as a whole is made and compared with other results in the literature.
Particle Filters vs Hidden Markov Models for Prosthetic Robot Hand Grasp Selection. M. Sharif. International Journal of Robotic Computing, 2019-12-01. doi:10.35708/rc1868-126253

Robotic prosthetic hands are commonly controlled using electromyography (EMG) signals as a means of inferring user intention. However, relying on EMG signals alone, although it provides very good results in lab settings, is not sufficiently robust for real-life conditions. For this reason, previous works have proposed taking advantage of other contextual cues. In this work, we propose a method for intention inference based on particle filtering (PF) that uses the trajectory information of the user's hand. Our methodology also provides an estimate of time-to-arrive, i.e., the time left until the object is reached, which is an essential variable for the successful grasping of objects. The proposed probabilistic framework can incorporate available sources of information to improve the inference process. We also provide a data-driven method based on a hidden Markov model (HMM) as a baseline for intention inference; HMMs are widely used for human gesture classification. The algorithms were trained and tested on 160 reaching trajectories collected from 10 subjects, each reaching for one of four objects at a time.
Arabic Poem Generation Incorporating Deep Learning and Phonetic CNNsubword Embedding Models. Sameerah Talafha, Banafsheh Rekabdar. International Journal of Robotic Computing, 2019-10-01. doi:10.35708/tai1868-126246

Arabic poetry generation is a very challenging task, since the linguistic structure of the Arabic language is considered a severe challenge for many researchers and developers in the Natural Language Processing (NLP) field. In this paper, we propose a poetry generation model with extended phonetic and semantic embeddings (Phonetic CNNsubword embeddings). We show that Phonetic CNNsubword embeddings make an effective contribution to the overall model performance compared to FastTextsubword embeddings. Our poetry generation model consists of a two-stage approach: (1) generating the first verse, which explicitly incorporates the theme-related phrase, and (2) generating the remaining verses with the proposed Hierarchy-Attention Sequence-to-Sequence model (HAS2S), which adequately captures word, phrase, and verse information between contexts. A comprehensive human evaluation confirms that the poems generated by our model outperform the baseline models in criteria such as Meaning, Coherence, Fluency, and Poeticness. Extensive quantitative experiments using Bi-Lingual Evaluation Understudy (BLEU) scores also demonstrate significant improvements over strong baselines.
{"title":"Enabling the Continuous Evolution of Ontologies for Ontology-Based Data Management","authors":"André Pomp, Johannes Lipp, Tobias Meisen","doi":"10.35708/tai1868-126244","DOIUrl":"https://doi.org/10.35708/tai1868-126244","url":null,"abstract":"","PeriodicalId":292418,"journal":{"name":"International Journal of Robotic Computing","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130284516","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Real-time 6D Racket Pose Estimation and Classification for Table Tennis Robots. Yapeng Gao. International Journal of Robotic Computing, 2019-09-01. doi:10.35708/rc1868-126249

For table tennis robots, it is a significant challenge to understand the opponent's movements and return the ball accordingly with high performance. One has to cope with various ball speeds and spins resulting from different stroke types. In this paper, we propose a real-time 6D racket pose detection method and classify racket movements into five stroke categories with a neural network. By using two monocular cameras, we can extract the racket's contours and choose some special points as feature points in image coordinates. With the 3D geometrical information of the racket, a wide-baseline stereo matching method is proposed to find the corresponding feature points and compute the 3D position and orientation of the racket by triangulation and plane fitting. Then, a Kalman filter is adopted to track the racket pose, and a multilayer perceptron (MLP) neural network is used to classify the pose movements. We conduct two experiments to evaluate the accuracy of racket pose detection and classification, in which the average errors in position and orientation are around 7.8 mm and 7.2°, respectively, compared with ground truth from a KUKA robot. The classification accuracy is 98%, matching that of the human pose estimation method based on Convolutional Pose Machines (CPMs).