Interprocess communication for distributed robotics
Pub Date: 1987-12-01 | DOI: 10.1109/JRA.1987.1087141
David Gauthier, P. Freedman, G. Carayannis, A. Malowany
Robot tasks have come to demand flexible, sensory-based "intelligent" behavior. The single robot has given way to multiple robots cooperating in three-dimensional dynamic environments, allowing them to accomplish complex and intricate tasks. Because of the complex nature of the tasks performed, centralized control is no longer practical; workcells have become centers of distributed computing. This motivates the need for an interprocess communication (IPC) facility that integrates the individual elements both within and between workcells. A survey of IPC is presented in the context of distributed robotics. To make the survey more meaningful, it is introduced by some remarks about general approaches to communication within a distributed computing environment. A discussion of the main IPC design issues for distributed robotics is included.
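As a minimal, hedged illustration of the message-passing style of IPC that such a survey covers (the process roles, queue transport, and message fields below are our own assumptions, not the facility described in the paper), two workcell processes can be connected through a queue:

```python
# Minimal message-passing IPC sketch between two workcell processes.
# The "vision" and "controller" roles and the message fields are
# illustrative assumptions, not the facility described in the paper.
import multiprocessing as mp

def vision_process(out_queue):
    # Publish a (hypothetical) object-pose measurement to the controller.
    for frame in range(3):
        out_queue.put({"sender": "vision", "frame": frame,
                       "object_pose": (0.10 * frame, 0.25, 0.0)})
    out_queue.put(None)  # sentinel: no more messages

def controller_process(in_queue):
    # Consume measurements and react; here we just print them.
    while True:
        msg = in_queue.get()
        if msg is None:
            break
        print("controller received", msg)

if __name__ == "__main__":
    q = mp.Queue()
    producer = mp.Process(target=vision_process, args=(q,))
    consumer = mp.Process(target=controller_process, args=(q,))
    producer.start(); consumer.start()
    producer.join(); consumer.join()
```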
{"title":"Interprocess communication for distributed robotics","authors":"David Gauthier, P. Freedman, G. Carayannis, A. Malowany","doi":"10.1109/JRA.1987.1087141","DOIUrl":"https://doi.org/10.1109/JRA.1987.1087141","url":null,"abstract":"Robot tasks have come to demand flexible sensory-based \"intelligent\" behavior. The single robot has given way to multiple robots cooperating in three-dimensional dynamic environments which allow them to accomplish complex and intricate tasks. Because of the complex nature of the tasks performed, centralized control is no longer practical; workcells have become centers of distributed computing. This motivates the need for an interprocess communication (IPC) facility which would integrate the individual elements both within and between workcells. A survey of IPC is presented in the context of distributed robotics. To make the survey more meaningful, it is introduced by some remarks about general approaches to communication within a distributed computing environment. A discussion of the main IPC design issues for distributed robotics is included.","PeriodicalId":404512,"journal":{"name":"IEEE Journal on Robotics and Automation","volume":"95 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1987-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131845111","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
MOGRER: A vehicle study and realization for in-pipe inspection tasks
Pub Date: 1987-12-01 | DOI: 10.1109/JRA.1987.1087149
T. Okada, T. Sanemori
A three-wheeled vehicle for in-pipe monitoring tasks is described. The vehicle is basically composed of two hinged arms forming what is hereafter referred to as a scissor structure. The arms are joined at one end, and another joint connects the middle points of the lower parts, called links. A wheel located at the pivot of the scissor structure is driven by an electric dc motor. Two other wheels, located at the ends of the links (on sphere bearings), are not actuated and can roll freely in any direction. Since the force of the arm mechanism pushing against the pipe wall is generated mechanically by an extension spring combined with levers, the vehicle holds itself in the pipe by pressing its wheels along the direction of the maximum pipe cross section. The vehicle moves automatically, by action of the driving wheel, in the direction in which the pipe extends. The relation between the stretch force and the configuration of the vehicle is analyzed to optimize the lengths of the levers and the direction in which the levers should be attached for force generation. The geometric and kinematic conditions required for the vehicle to move stably are investigated. Simulation results for locomotion demonstrate the vehicle's ability to adjust itself to different pipe shapes and sizes. The resulting vehicle, called MOGRER, was built to carry a CCD video camera attached to its body for in-pipe monitoring tasks. Experimental results show that MOGRER can monitor while moving in a steeply inclined pipe, overcoming the effects of gravity and changes in pipe size and shape.
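For a concrete feel for the self-adjustment to pipe size, the sketch below uses a simplified planar model of the cross section (all three wheel centers on one circle, links of a single effective length measured from the pivot-wheel center; these modeling assumptions are ours, not the paper's) to compute the scissor half-angle at which all wheels touch the wall:

```python
# Simplified cross-sectional geometry for a scissor-type in-pipe vehicle.
# Assumptions (ours, for illustration only): circular pipe of radius R,
# identical wheels of radius r, and links of effective length L from the
# pivot-wheel center to each free-wheel center, all lying in one plane.
# All wheel centers then sit on a circle of radius rho = R - r, and the
# half-opening angle alpha (between each link and the line from the
# pivot-wheel center through the pipe axis) satisfies cos(alpha) = L / (2*rho).
import math

def scissor_half_angle(pipe_radius, wheel_radius, link_length):
    rho = pipe_radius - wheel_radius          # circle of wheel centers
    c = link_length / (2.0 * rho)
    if not 0.0 < c <= 1.0:
        raise ValueError("links cannot span this pipe size")
    return math.acos(c)                        # radians

if __name__ == "__main__":
    for R in (0.10, 0.12, 0.15):               # pipe radii in meters
        alpha = scissor_half_angle(R, wheel_radius=0.02, link_length=0.12)
        print(f"R = {R:.2f} m -> half-angle = {math.degrees(alpha):.1f} deg")
```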
{"title":"MOGRER: A vehicle study and realization for in-pipe inspection tasks","authors":"T. Okada, T. Sanemori","doi":"10.1109/JRA.1987.1087149","DOIUrl":"https://doi.org/10.1109/JRA.1987.1087149","url":null,"abstract":"A three-wheeled vehicle for in-pipe monitoring tasks is described. The vehicle is basically composed of two hinged arms building what is hereafter referenced as a scissor structure. The arms are joined at one end. Another joint connects the middle points of the lower parts, called links. A wheel located at the pivot of the scissor structure is driven by an electric dc motor. Two other wheels located at the ends of the links (sphere bearings) are not actuated and can roll freely in any direction. Since the force of the arm mechanism pushing against the pipe wall is generated mechanically by an extension spring combined with levers, the vehicle rests in the pipe by pressing its wheels in the direction of maximum pipe cross section. The vehicle moves automatically by action of the driving wheel in the direction in which the pipe extends. The relation between the stretch force and the configuration of the vehicle is analyzed to optimize the lengths of the levers and the direction in which the levers should be attached for force generation. The geometric and kinematic conditions of the vehicle are investigated to make it move stably. Results of a simulation of locomotion proved the capability of self-adjustment of the vehicle to different pipe shapes and sizes. The resulting vehicle mechanism, called MOGRER, was built to carry a CCD video camera attached to its body for in-pipe monitoring tasks. Experimental results show that MOGRER can monitor while moving in an inclined pipe with a heavy angle, overcoming the effects of gravity and changes in pipe size and shape.","PeriodicalId":404512,"journal":{"name":"IEEE Journal on Robotics and Automation","volume":"7 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1987-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125764693","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Robot navigation in unknown terrains using learned visibility graphs. Part I: The disjoint convex obstacle case
Pub Date: 1987-12-01 | DOI: 10.1109/JRA.1987.1087133
B. Oommen, S. Iyengar, N. Rao, R. Kashyap
The problem of navigating an autonomous mobile robot through an unexplored terrain of obstacles is discussed. The case in which the obstacles are "known" has been studied extensively in the literature; here, a completely unexplored obstacle terrain is considered. In this case, navigation involves both learning information about the obstacle terrain and path planning. An algorithm is presented to navigate a robot in an unexplored terrain that is arbitrarily populated with disjoint convex polygonal obstacles in the plane. The navigation process consists of a number of traversals, each from an arbitrary source point to an arbitrary destination point. The proposed algorithm is proven to yield a convergent solution to each path of traversal. Initially, the terrain is explored using a rather primitive sensor, and the paths of traversal may be suboptimal. The visibility graph that models the obstacle terrain is incrementally constructed by integrating the information about the paths traversed so far. At any stage of learning, the partially learned terrain model is represented as a learned visibility graph, which is updated after each traversal. It is proven that the learned visibility graph converges to the visibility graph with probability one when the source and destination points are chosen randomly. Ultimately, the availability of the complete visibility graph enables the robot to plan globally optimal paths and also obviates any further use of sensors.
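A rough sketch of the learned-visibility-graph idea, with the bookkeeping and a Dijkstra planner simplified well beyond the authors' algorithm, might look as follows; the vertex names, edge costs, and update rule are illustrative assumptions:

```python
# Sketch of a "learned visibility graph": vertices and visibility edges
# observed so far are accumulated across traversals, and paths are planned
# on the partial graph with Dijkstra. The update rule below is a
# simplification for illustration, not the authors' exact procedure.
import heapq
from collections import defaultdict

class LearnedVisibilityGraph:
    def __init__(self):
        self.edges = defaultdict(dict)   # vertex -> {neighbor: distance}

    def integrate_traversal(self, sensed_edges):
        # sensed_edges: iterable of (vertex_a, vertex_b, distance) tuples
        # discovered by the sensor during the latest traversal.
        for a, b, d in sensed_edges:
            self.edges[a][b] = min(d, self.edges[a].get(b, float("inf")))
            self.edges[b][a] = self.edges[a][b]

    def shortest_path(self, source, goal):
        # Dijkstra on the partially learned graph.
        dist, prev = {source: 0.0}, {}
        heap = [(0.0, source)]
        while heap:
            d, u = heapq.heappop(heap)
            if u == goal:
                break
            if d > dist.get(u, float("inf")):
                continue
            for v, w in self.edges[u].items():
                nd = d + w
                if nd < dist.get(v, float("inf")):
                    dist[v], prev[v] = nd, u
                    heapq.heappush(heap, (nd, v))
        if goal not in dist:
            return None
        path, node = [goal], goal
        while node != source:
            node = prev[node]
            path.append(node)
        return list(reversed(path))

if __name__ == "__main__":
    g = LearnedVisibilityGraph()
    g.integrate_traversal([("S", "A", 2.0), ("A", "G", 3.0)])
    g.integrate_traversal([("S", "B", 1.0), ("B", "G", 3.5)])
    print(g.shortest_path("S", "G"))   # ['S', 'B', 'G'] with cost 4.5
```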
{"title":"Robot navigation in unknown terrains using learned visibility graphs. Part I: The disjoint convex obstacle case","authors":"B. Oommen, S. Iyengar, N. Rao, R. Kashyap","doi":"10.1109/JRA.1987.1087133","DOIUrl":"https://doi.org/10.1109/JRA.1987.1087133","url":null,"abstract":"The problem of navigating an autonomous mobile robot through unexplored terrain of obstacles is discussed. The case when the obstacles are \"known\" has been extensively studied in literature. Completely unexplored obstacle terrain is considered. In this case, the process of navigation involves both learning the information about the obstacle terrain and path planning. An algorithm is presented to navigate a robot in an unexplored terrain that is arbitrarily populated with disjoint convex polygonal obstacles in the plane. The navigation process is constituted by a number of traversals; each traversal is from an arbitrary source point to an arbitrary destination point. The proposed algorithm is proven to yield a convergent solution to each path of traversal. Initially, the terrain is explored using a rather primitive sensor, and the paths of traversal made may be suboptimal. The visibility graph that models the obstacle terrain is incrementally constructed by integrating the information about the paths traversed so far. At any stage of learning, the partially learned terrain model is represented as a learned visibility graph, and it is updated after each traversal. It is proven that the learned visibility graph converges to the visibility graph with probability one when the source and destination points are chosen randomly. Ultimately, the availability of the complete visibility graph enables the robot to plan globally optimal paths and also obviates the further usage of sensors.","PeriodicalId":404512,"journal":{"name":"IEEE Journal on Robotics and Automation","volume":"93 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1987-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129878639","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Minimum operations and minimum parameters of the dynamic models of tree structure robots
Pub Date: 1987-12-01 | DOI: 10.1109/JRA.1987.1087145
W. Khalil, Jean-François Kleinfinger
An efficient method for calculating the inverse dynamic models of tree-structure robots is presented. The method significantly reduces the computational burden, so that the inverse dynamics can be computed in real time at servo rate, and it leads almost directly to models with a minimum number of arithmetic operations. It is based on a Newton-Euler formulation that is linear in the inertial parameters, on an iterative symbolic procedure, and on condensing the inertial parameters by regrouping and eliminating some of them. The robot is described by a new notation inspired by the Denavit-Hartenberg notation. A Fortran program has been developed to generate the dynamic models of tree-structure robots automatically.
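Because the Newton-Euler model is linear in the inertial parameters, the joint torques can be written as tau = Y(q, qdot, qddot) * phi, and the minimum ("base") parameter count can be found numerically as the rank of the regressor stacked over sufficiently exciting motions. The toy one-joint model below illustrates only this regrouping idea; it is not the paper's symbolic procedure:

```python
# Numerical illustration of "minimum parameters": the joint-torque model is
# linear in the inertial parameters, tau = Y(q, qd, qdd) @ phi, and the number
# of identifiable ("base") parameters equals the rank of the regressor stacked
# over sufficiently exciting motions. Toy model (ours, not the paper's):
# a revolute joint driving a point mass m at radius l, plus a rotor inertia I:
#     tau = (I + m*l**2) * qdd + m*l * g * cos(q)
# With phi = [I, m*l**2, m*l], the terms I and m*l**2 only ever appear as
# their sum, so only 2 of the 3 parameters are identifiable.
import numpy as np

g = 9.81

def regressor_row(q, qdd):
    # One row of Y for the toy model above, for parameters [I, m*l^2, m*l].
    return np.array([qdd, qdd, g * np.cos(q)])

rng = np.random.default_rng(0)
Y = np.vstack([regressor_row(q, qdd)
               for q, qdd in rng.uniform(-1.0, 1.0, size=(50, 2))])

print("stacked regressor shape:", Y.shape)                      # (50, 3)
print("number of base parameters:", np.linalg.matrix_rank(Y))   # 2
```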
{"title":"Minimum operations and minimum parameters of the dynamic models of tree structure robots","authors":"W. Khalil, Jean-François Kleinfinger","doi":"10.1109/JRA.1987.1087145","DOIUrl":"https://doi.org/10.1109/JRA.1987.1087145","url":null,"abstract":"An efficient method for the calculation of the inverse dynamic models of tree structure robots is presented. The given method reduces significantly the computational burden such that the inverse dynamics can be computed in real time at servo rate. The method leads almost directly to models with a minimum number of arithmetic operations. The method is based on a Newton-Euler formulation that is linear in the inertial parameters, on an iterative symbolic procedure, and on condensating the inertial parameters by regrouping and eliminating some of them. The description of the robot is carried out by a new notation inspired from Denavit and Hartenberg notation. A Fortran program has been developed to generate automatically the dynamic models of tree structure robots.","PeriodicalId":404512,"journal":{"name":"IEEE Journal on Robotics and Automation","volume":"3 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1987-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129195841","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Force interaction and allocation for the legs of a walking vehicle
Pub Date: 1987-12-01 | DOI: 10.1109/JRA.1987.1087137
C. Klein, Tae-Sang Chung
Force is often used in the control of the legs of a walking machine to allow the vehicle to adapt to terrain irregularity. However, force interactions among the legs can cause control-system instabilities if not properly treated. Different criteria for allocating forces to the legs of a walking machine are considered, properties of force-induced instability modes are studied, and a plan of hybrid control allocated leg by leg is introduced as a means of avoiding these force-interaction modes without requiring an excessively high control frequency.
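As one concrete example of an allocation criterion (not necessarily among those compared in the paper), the supporting forces can be chosen to minimize the sum of squared vertical foot forces subject to force and moment balance, which reduces to a pseudo-inverse computation in this simplified sagittal-plane model:

```python
# One example criterion for allocating support forces among the legs:
# minimize the sum of squared (vertical) foot forces subject to force and
# moment balance. This sagittal-plane, vertical-force-only model and the
# pseudo-inverse solution are illustrative simplifications, not necessarily
# the criteria analyzed in the paper.
import numpy as np

def allocate_vertical_forces(foot_x, weight, com_x):
    """foot_x: x-coordinates of the supporting feet (m);
       weight: total supported weight (N); com_x: x of the center of mass (m)."""
    foot_x = np.asarray(foot_x, dtype=float)
    A = np.vstack([np.ones_like(foot_x), foot_x])   # balance equations
    b = np.array([weight, weight * com_x])          # sum f = W, sum x*f = W*x_c
    f = np.linalg.pinv(A) @ b                       # least-norm allocation
    return f                                        # may go negative for a
                                                    # poorly placed stance

if __name__ == "__main__":
    forces = allocate_vertical_forces(foot_x=[-0.6, -0.2, 0.3, 0.7],
                                      weight=2000.0, com_x=0.05)
    print(np.round(forces, 1), "sum =", round(float(forces.sum()), 1))
```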
{"title":"Force interaction and allocation for the legs of a walking vehicle","authors":"C. Klein, Tae-Sang Chung","doi":"10.1109/JRA.1987.1087137","DOIUrl":"https://doi.org/10.1109/JRA.1987.1087137","url":null,"abstract":"Force is often used in the control of the legs of a walking machine to allow a vehicle to adapt to terrain irregularity. However, interactions in force among the legs have the capability of causing control system instabilities if not properly treated. Different criteria for allocating forces to the legs of a walking machine are considered, properties of force-induced instability modes are studied, and the plan of hybrid control allocated by legs as a means of avoiding these force interaction modes without requiring an excessively high control frequency is introduced.","PeriodicalId":404512,"journal":{"name":"IEEE Journal on Robotics and Automation","volume":"7 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1987-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115212825","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Segmented descriptions of 3-D surfaces
Pub Date: 1987-12-01 | DOI: 10.1109/JRA.1987.1087146
T. Fan, G. Medioni, R. Nevatia
A method to segment and describe the visible surfaces of three-dimensional (3-D) objects is presented: the surfaces are first segmented into simple surface patches, and these patches and their boundaries are then used to describe the 3-D surfaces. First, distinguished points, which will form the edges of the segmented surface patches, are extracted using the zero-crossings and extrema of curvature along a given direction. Two different methods are used: if the sensor provides relatively noise-free range images, the principal curvatures are computed at a single resolution; otherwise, a multiple-scale approach is used and curvature is computed in four directions 45° apart to facilitate interscale tracking. These points are then grouped into curves, and the curves are classified into classes that correspond to significant physical properties such as jump boundaries, folds, and ridge lines (or smooth extrema). The jump boundaries and folds are then used to segment the surfaces into surface patches, and a simple surface is fitted to each patch to reconstruct the original objects. These descriptions not only make explicit most of the salient properties present in the original input, but are also better suited to further processing, such as matching with a given model. The generality and robustness of the approach are illustrated on scene images from different available range sensors.
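The first step, detecting candidate boundary points from the zero-crossings and extrema of curvature along a scan direction, can be sketched on a one-dimensional depth profile as below; the finite-difference curvature, the threshold, and the synthetic profile are our simplifications:

```python
# Sketch of the first step described above: along one scan direction of a
# range image, compute the profile curvature and mark its zero-crossings and
# local extrema as candidate boundary points. The finite-difference curvature
# and the synthetic profile are simplifications for illustration.
import numpy as np

def profile_curvature(z, dx=1.0):
    # Curvature of the graph z(x): k = z'' / (1 + z'**2)**1.5
    zp = np.gradient(z, dx)
    zpp = np.gradient(zp, dx)
    return zpp / (1.0 + zp**2) ** 1.5

def candidate_points(z, dx=1.0, extremum_thresh=0.05):
    k = profile_curvature(z, dx)
    zero_cross = np.where(np.diff(np.sign(k)) != 0)[0]
    extrema = [i for i in range(1, len(k) - 1)
               if abs(k[i]) > extremum_thresh
               and abs(k[i]) >= abs(k[i - 1]) and abs(k[i]) >= abs(k[i + 1])]
    return zero_cross, np.array(extrema)

if __name__ == "__main__":
    x = np.linspace(0.0, 10.0, 200)
    # Synthetic depth profile: a flat region meeting a curved one (a "fold").
    z = np.where(x < 5.0, 1.0, 1.0 + 0.2 * (x - 5.0) ** 2)
    zc, ex = candidate_points(z, dx=x[1] - x[0])
    print("zero-crossings near indices:", zc[:5])
    print("curvature extrema near indices:", ex[:5])
```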
{"title":"Segmented descriptions of 3-D surfaces","authors":"T. Fan, G. Medioni, R. Nevatia","doi":"10.1109/JRA.1987.1087146","DOIUrl":"https://doi.org/10.1109/JRA.1987.1087146","url":null,"abstract":"A method to segment and describe visible surfaces of three-dimensional (3-D) objects is presented by first segmenting the surfaces into simple surface patches and then using these patches and their boundaries to describe the 3-D surfaces. First, distinguished points are extracted which will comprise the edges of segmented surface patches, using the zero-crossings and extrema of curvature along a given direction. Two different methods are used: if the sensor provides relatively noise-free range images, the principal curvatures are computed at only one resolution, otherwise, a multiple scale approach is used and curvature is computed in four directions 45° apart to facilitate interscale tracking. These points are then grouped into curves and these curves are classified into different classes which correspond to significant physical properties such as jump boundaries, folds, and ridge lines (or smooth extrema). Then jump boundaries and folds are used to segment the surfaces into surface patches, and a simple surface is fitted to each patch to reconstruct the original objects. These descriptions not only make explicit most of the salient properties present in the original input, but are more suited to further processing, such as matching with a given model. The generality and robustness of this approach is illustrated on scene images with different available range sensors.","PeriodicalId":404512,"journal":{"name":"IEEE Journal on Robotics and Automation","volume":"24 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1987-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133333774","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Dual-number transformation and its applications to robotics
Pub Date: 1987-12-01 | DOI: 10.1109/JRA.1987.1087138
You-Liang Gu, J. Luh
In robotics, coordinate transformations in three-dimensional (3D) Cartesian space are usually handled with the homogeneous transformation, which is defined in four-dimensional space and whose matrix multiplication performs simultaneous rotation and translation. The homogeneous transformation, however, is a point transformation. In contrast, a line transformation can also be defined naturally in 3D Cartesian space, in which the transformed element is a line in 3D space instead of a point. In robotic kinematics and dynamics, the velocity and acceleration vectors are often the direct targets of analysis, and the line transformation has advantages over the ordinary point transformation since combinations of linear and angular quantities can be represented by lines in 3D space. Since a line in 3D space is determined by four independent parameters, finding an appropriate type of "number representation" that combines two real variables is the first key prerequisite. The dual number is chosen for the line representation, and lemmas and theorems indicating relevant properties of the dual number, dual vector, and dual matrix are proposed. This is followed by the transformation and manipulation for robotic applications. The presented procedure offers an algorithm for the symbolic analysis of both rotation and translation; in particular, it can be used effectively for the direct determination of Jacobian matrices and their derivatives. It is shown that the proposed procedure contributes a simplified approach to the formulation of robotic kinematics, dynamics, and control-system modeling.
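A minimal illustration of the underlying dual-number arithmetic (the algebra only, not the paper's full line-transformation machinery) is given below; with eps^2 = 0, a dual angle theta + eps*d packages a rotation about a line together with a translation along it:

```python
# Minimal dual-number arithmetic: a dual number a + eps*b with eps**2 = 0.
# The product rule (a + eps*b)(c + eps*d) = a*c + eps*(a*d + b*c) is what lets
# a "dual angle" theta + eps*d carry a rotation theta about a line together
# with a translation d along it. This class is a generic illustration of dual
# arithmetic, not the paper's line-transformation formulation.
import math

class Dual:
    def __init__(self, real, dual=0.0):
        self.real, self.dual = float(real), float(dual)

    def __add__(self, other):
        return Dual(self.real + other.real, self.dual + other.dual)

    def __mul__(self, other):
        # eps**2 = 0 kills the dual*dual term.
        return Dual(self.real * other.real,
                    self.real * other.dual + self.dual * other.real)

    def __repr__(self):
        return f"{self.real:+.4f} {self.dual:+.4f}e"

def dual_sin(x: Dual) -> Dual:
    # f(a + eps*b) = f(a) + eps*b*f'(a), a consequence of eps**2 = 0.
    return Dual(math.sin(x.real), x.dual * math.cos(x.real))

def dual_cos(x: Dual) -> Dual:
    return Dual(math.cos(x.real), -x.dual * math.sin(x.real))

if __name__ == "__main__":
    theta = Dual(math.pi / 3, 0.5)          # dual angle: rotation + translation
    s, c = dual_sin(theta), dual_cos(theta)
    print("sin:", s, " cos:", c)
    # sin**2 + cos**2 should equal the dual identity 1 + 0e.
    print("check:", s * s + c * c)
```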
{"title":"Dual-number transformation and its applications to robotics","authors":"You-Liang Gu, J. Luh","doi":"10.1109/JRA.1987.1087138","DOIUrl":"https://doi.org/10.1109/JRA.1987.1087138","url":null,"abstract":"In robotics, to deal with coordinate transformation in three-dimensional (3D) Cartesian space, the homogeneous transformation is usually applied. It is defined in the four-dimensional space, and its matrix multiplication performs the simultaneous rotation and translation. The homogeneous transformation, however, is a point transformation. In contrast, a line transformation can also naturally be defined in 3D Cartesian space, in which the transformed element is a line in 3D space instead of a point. In robotic kinematics and dynamics, the velocity and acceleration vectors are often the direct targets of analysis. The line transformation will have advantages over the ordinary point transformation, since the combination of the linear and angular quantities can be represented by lines in 3D space. Since a line in 3D space is determined by four independent parameters, finding an appropriate type of \"number representation\" which combines two real variables is the first key prerequisite. The dual number is chosen for the line representation, and lemmas and theorems indicating relavent properties of the dual number, dual vector, and dual matrix are proposed. This is followed by the transformation and manipulation for the robotic applications. The presented procedure offers an algorithm which deals with the symbolic analysis for both rotation and translation. In particular, it can effectively be used for direct determination of Jacobian matrices and their derivatives. It is shown that the proposed procedure contributes a simplified approach to the formulation of the robotic kinematics, dynamics, and control system modeling.","PeriodicalId":404512,"journal":{"name":"IEEE Journal on Robotics and Automation","volume":"32 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1987-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124980192","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Task assignment and load balancing of autonomous vehicles in a flexible manufacturing system
Pub Date: 1987-12-01 | DOI: 10.1109/JRA.1987.1087134
Chun-Erh Chen, C. S. Lee, C. McGillem
A graph-theoretic approach is presented for determining an optimal task (or routing) assignment of p autonomous vehicles (AV's) among m workstations in a flexible manufacturing system that both minimizes the assignment completion time and balances the load among the AV's. This task assignment problem is equivalent to an optimal routing assignment that allocates the m workstations to the p autonomous vehicles. A cost function is defined in terms of the job execution time and the traveling time of the AV's. Optimization of the objective function is based on the minimax of the job execution time and the minimization of the max-min of the traveling time. This optimal task assignment problem is known to be NP-complete, so it is solved by a state-space search method, the A* algorithm. The A* algorithm is a classical minimum-cost graph search method; it is guaranteed to find an optimal solution if the evaluation function, which uses heuristic information about the problem to speed up the search, is properly defined. If potential collisions exist on the optimal routing assignment, dynamic collision detection must be carried out during the state-space search to guarantee an optimal collision-free routing assignment. Collision avoidance is handled by using an ordered collision matrix to adjust the arrival time of every AV arriving at the center of the "collision zone" when a potential collision is detected. Again, the A* search strategy can be used to obtain an optimal collision-free routing assignment, and the optimal assignment obtained also achieves load balancing of the p AV's.
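A generic A* search, shown below on a small grid stand-in rather than the paper's assignment state space, cost function, or collision matrix, illustrates why an admissible evaluation function guarantees an optimal solution:

```python
# Generic A* search: expand the state with the smallest f = g + h, where h is
# an admissible heuristic (never overestimates the remaining cost), which is
# the property that guarantees an optimal solution. The small grid problem and
# Manhattan-distance heuristic below are stand-ins for the paper's assignment
# state space, cost function, and collision handling.
import heapq, itertools

def a_star(start, goal, neighbors, heuristic):
    counter = itertools.count()              # tie-breaker for the heap
    open_heap = [(heuristic(start), 0.0, next(counter), start, None)]
    best_g, parent = {start: 0.0}, {}
    while open_heap:
        f, g, _, state, prev = heapq.heappop(open_heap)
        if state in parent:                  # already expanded with a better g
            continue
        parent[state] = prev
        if state == goal:
            path = [state]
            while parent[path[-1]] is not None:
                path.append(parent[path[-1]])
            return list(reversed(path)), g
        for nxt, step_cost in neighbors(state):
            ng = g + step_cost
            if ng < best_g.get(nxt, float("inf")):
                best_g[nxt] = ng
                heapq.heappush(open_heap,
                               (ng + heuristic(nxt), ng, next(counter), nxt, state))
    return None, float("inf")

if __name__ == "__main__":
    blocked = {(1, 1), (1, 2), (2, 1)}
    def neighbors(p):
        x, y = p
        for nx, ny in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            if 0 <= nx < 4 and 0 <= ny < 4 and (nx, ny) not in blocked:
                yield (nx, ny), 1.0
    goal = (3, 3)
    h = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])
    path, cost = a_star((0, 0), goal, neighbors, h)
    print("cost", cost, "path", path)
```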
{"title":"Task assignment and load balancing of autonomous vehicles in a flexible manufacturing system","authors":"Chun-Erh Chen, C. S. Lee, C. McGillem","doi":"10.1109/JRA.1987.1087134","DOIUrl":"https://doi.org/10.1109/JRA.1987.1087134","url":null,"abstract":"A graph-theoretic approach for determining an optimal task (or routing) assignment of p autonomous vehicles (AV's) among m workstations in a flexible manufacturing system which both minimizes the assignment completion time and balances the load among the AV's is presented. This task assignment problem is equivalent to an optimal routing assignmenl of destinating the m workstations to the p autonomous vehicles. A cost function is defined in terms of the job execution time and the traveling time performed by the AV's. Optimization of the objective function is based on the minimax of the job execution time and the minimization of max-min of the traveling time. This optimal task assignment problem is known to be NP-complete. Thus the problem is solved by a state-space search method-the A algorithm. The A algorithm is a classical minimum-cost graph search method. It is guaranteed to find an optimal solution if the evaluation function which utilizes the heuristic information about the problem for speeding up the search is properly defined. If potential collisions exist on the optimal routing assignment, then dynamic collision detection must be carried out during the state-space search to guarantee an optimal collision-free routing assignment. This collision avoidance can be taken care of by using an ordered collision matrix to adjust the arrival time of every AV arriving at the center of the \"collision zone\" if a potential collision is detected. Again, the A search strategy can be utilized to obtain an optimal collision-free routing assignment, and the optimal assignment obtained also achieves load balancing of the p AV's.","PeriodicalId":404512,"journal":{"name":"IEEE Journal on Robotics and Automation","volume":"73 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1987-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132101433","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Position verification of a mobile robot using standard pattern
Pub Date: 1987-12-01 | DOI: 10.1109/JRA.1987.1087143
M. Kabuka, Á. Arenas
As mobile robots take on more and more of the tasks normally delegated to humans, they need to acquire higher degrees of autonomous operation, which calls for accurate and efficient position determination and/or verification. The critical geometric dimensions of a standard pattern are used here to locate the relative position of the mobile robot with respect to the pattern; as a result, the method does not depend on the values of any intrinsic camera parameters except the focal length. In addition, the method has the advantages of simplicity and flexibility. The standard pattern also carries a unique identification code, implemented with bar codes, that enables the system to find the absolute location of the pattern; these bar codes also assist the scanning algorithms in locating the pattern in the environment. A thorough error analysis and experimental results obtained through software simulation are presented, as well as the current direction of our work.
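The role of the focal length can be illustrated with a pinhole-camera sketch for the frontal case (a simplification of the relative-pose recovery in the paper; the example numbers are made up): the known pattern width and its apparent image width give the range, and the image offset of the pattern center gives the lateral displacement:

```python
# Pinhole-camera illustration of why only the focal length is needed: for a
# pattern of known physical width W viewed frontally, the apparent width w in
# the image gives the range Z = f * W / w, and the image offset of the pattern
# center gives the lateral displacement. This frontal, single-dimension case is
# a simplification of the relative-pose recovery described in the paper.
def range_and_offset(focal_len_px, pattern_width_m, apparent_width_px,
                     center_offset_px):
    z = focal_len_px * pattern_width_m / apparent_width_px   # range (m)
    x = center_offset_px * z / focal_len_px                  # lateral offset (m)
    return z, x

if __name__ == "__main__":
    z, x = range_and_offset(focal_len_px=800.0, pattern_width_m=0.30,
                            apparent_width_px=120.0, center_offset_px=-40.0)
    print(f"range = {z:.2f} m, lateral offset = {x:.2f} m")
```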
{"title":"Position verification of a mobile robot using standard pattern","authors":"M. Kabuka, Á. Arenas","doi":"10.1109/JRA.1987.1087143","DOIUrl":"https://doi.org/10.1109/JRA.1987.1087143","url":null,"abstract":"As mobile robots are taking on more and more of the tasks that were normally delegated to humans, they need to acquire higher degrees of autonomous operation, which calls for accurate and efficient position determination and/or verification. The critical geometric dimensions of a standard pattern are used here to locate the relative position of the mobile robot with respect to the pattern; by doing so, the method does not depend on values of any intrinsic camera parameters, except the focal length. In addition, this method has the advantages of simplicity and flexibility. This standard pattern is also provided with a unique identification code, using bar codes, that enables the system to find the absolute location of the pattern. These bar codes also assist in the scanning algorithms to locate the pattern in the environment. A thorough error analysis and experimental results obtained through software simulation are presented, as well as the current direction of our work.","PeriodicalId":404512,"journal":{"name":"IEEE Journal on Robotics and Automation","volume":"113 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1987-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121202929","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Kalman filtering, smoothing, and recursive robot arm forward and inverse dynamics
Pub Date: 1987-12-01 | DOI: 10.1109/JRA.1987.1087147
G. Rodríguez
The inverse and forward dynamics problems for multilink serial manipulators are solved by using recursive techniques from linear filtering and smoothing theory. The pivotal step is to cast the system dynamics and kinematics as a two-point boundary-value problem. Solution of this problem leads to filtering and smoothing techniques similar to the equations of Kalman filtering and Bryson-Frazier fixed time-interval smoothing. The solutions prescribe an inward filtering recursion to compute a sequence of constraint moments and forces followed by an outward recursion to determine a corresponding sequence of angular and linear accelerations. An inward recursion refers to a sequential technique that starts at the tip of the terminal link and proceeds inwardly through all of the links until it reaches the base. Similarly, an outward recursion starts at the base and propagates out toward the tip. The recursive solutions are O(N), in the sense that the number of required computations only grows linearly with the number of links. A technique is provided to compute the relative angular accelerations at all of the joints from the applied external joint moments (and vice versa). It also provides an approach to evaluate recursively the composite multilink system inertia matrix and its inverse. The main contribution is to establish the equivalence between the filtering and smoothing techniques arising in state estimation theory and the methods of recursive robot dynamics. The filtering and smoothing architecture is very easy to understand and implement. This provides for a better understanding of robot dynamics. While the focus is not on exploring computational efficiency, some initial results in that direction are obtained. This is done by comparing performance with other recursive methods for a planar chain example. The analytical foundation is laid for the potential use of filtering and smoothing techniques in robot dynamics and control.
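To make the filtering/smoothing machinery concrete, the sketch below runs a scalar Kalman filter forward sweep followed by a Rauch-Tung-Striebel fixed-interval smoother backward sweep (a close relative of the Bryson-Frazier form cited in the abstract) on a toy random-walk model; it illustrates only the estimation-theory side that the paper maps onto the inward and outward dynamics recursions:

```python
# Scalar Kalman filter (forward sweep) followed by a Rauch-Tung-Striebel
# fixed-interval smoother (backward sweep), to make concrete the two-sweep
# filtering/smoothing structure that the paper maps onto the inward and
# outward recursions of robot-arm dynamics. This is the estimation-theory
# side only, on a toy random-walk model, not the dynamics algorithm itself.
import numpy as np

def kalman_rts(z, q=0.01, r=0.25, x0=0.0, p0=1.0):
    n = len(z)
    x_f, p_f = np.zeros(n), np.zeros(n)       # filtered estimates
    x_p, p_p = np.zeros(n), np.zeros(n)       # one-step predictions
    x, p = x0, p0
    for k in range(n):                        # forward (filtering) sweep
        x_p[k], p_p[k] = x, p + q             # predict (random-walk model)
        gain = p_p[k] / (p_p[k] + r)          # Kalman gain
        x = x_p[k] + gain * (z[k] - x_p[k])   # update with measurement z[k]
        p = (1.0 - gain) * p_p[k]
        x_f[k], p_f[k] = x, p
    x_s = x_f.copy()                          # backward (smoothing) sweep
    for k in range(n - 2, -1, -1):
        c = p_f[k] / p_p[k + 1]
        x_s[k] = x_f[k] + c * (x_s[k + 1] - x_p[k + 1])
    return x_f, x_s

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    truth = np.cumsum(rng.normal(0.0, 0.1, 100))
    meas = truth + rng.normal(0.0, 0.5, 100)
    filtered, smoothed = kalman_rts(meas)
    print("filtered RMSE:", round(float(np.sqrt(np.mean((filtered - truth) ** 2))), 3))
    print("smoothed RMSE:", round(float(np.sqrt(np.mean((smoothed - truth) ** 2))), 3))
```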
{"title":"Kalman filtering, smoothing, and recursive robot arm forward and inverse dynamics","authors":"G. Rodríguez","doi":"10.1109/JRA.1987.1087147","DOIUrl":"https://doi.org/10.1109/JRA.1987.1087147","url":null,"abstract":"The inverse and forward dynamics problems for multilink serial manipulators are solved by using recursive techniques from linear filtering and smoothing theory. The pivotal step is to cast the system dynamics and kinematics as a two-point boundary-value problem. Solution of this problem leads to filtering and smoothing techniques similar to the equations of Kalman filtering and Bryson-Frazier fixed time-interval smoothing. The solutions prescribe an inward filtering recursion to compute a sequence of constraint moments and forces followed by an outward recursion to determine a corresponding sequence of angular and linear accelerations. An inward recursion refers to a sequential technique that starts at the tip of the terminal link and proceeds inwardly through all of the links until it reaches the base. Similarly, an outward recursion starts at the base and propagates out toward the tip. The recursive solutions are O(N), in the sense that the number of required computations only grows linearly with the number of links. A technique is provided to compute the relative angular accelerations at all of the joints from the applied external joint moments (and vice versa). It also provides an approach to evaluate recursively the composite multilink system inertia matrix and its inverse. The main contribution is to establish the equivalence between the filtering and smoothing techniques arising in state estimation theory and the methods of recursive robot dynamics. The filtering and smoothing architecture is very easy to understand and implement. This provides for a better understanding of robot dynamics. While the focus is not on exploring computational efficiency, some initial results in that direction are obtained. This is done by comparing performance with other recursive methods for a planar chain example. The analytical foundation is laid for the potential use of filtering and smoothing techniques in robot dynamics and control.","PeriodicalId":404512,"journal":{"name":"IEEE Journal on Robotics and Automation","volume":"108 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1987-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114410812","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}