ELVIS: Eigenvectors for Land Vehicle Image System
J. Hancock, C. Thorpe
Pub Date: 1995-08-05. DOI: 10.1109/IROS.1995.525772
Proceedings 1995 IEEE/RSJ International Conference on Intelligent Robots and Systems. Human Robot Interaction and Cooperative Robots.

ELVIS (Eigenvectors for Land Vehicle Image System) is a road-following system designed to drive the CMU Navlabs. It is based on ALVINN, the neural network road-following system built by Dean Pomerleau at CMU. ELVIS is an attempt to more fully understand ALVINN and to determine whether it is possible to design a system that can rival ALVINN using the same input and output, but without using a neural network. Like ALVINN, ELVIS observes the road through a video camera and observes human steering response through encoders mounted on the steering column. After a few minutes of observing the human trainer, ELVIS can take control. ELVIS learns the eigenvectors of the image and steering training set via principal component analysis. These eigenvectors roughly correspond to the primary features of the image set and their correlations to steering. Road-following is then performed by projecting new images onto the previously calculated eigenspace. The ELVIS architecture and experiments are discussed, as well as implications for eigenvector-based systems and how they compare with neural network-based systems.
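The projection step the abstract describes, learning eigenvectors of combined image-plus-steering training vectors and then projecting a new image (whose steering is unknown) onto that eigenspace, can be sketched in a few lines of NumPy. This is an illustrative toy, not the authors' implementation: the image size, number of components, random stand-in data, and the least-squares recovery of the steering coordinate from the image coordinates alone are all assumptions.

```python
import numpy as np

# Hypothetical dimensions: 30x32 grayscale road images, one steering value each.
rng = np.random.default_rng(0)
n_train, img_dim = 200, 30 * 32

images = rng.random((n_train, img_dim))          # stand-in for camera frames
steering = rng.uniform(-1.0, 1.0, (n_train, 1))  # stand-in for encoder readings

# Stack image and steering into one training vector, as the abstract describes.
X = np.hstack([images, steering])
mean = X.mean(axis=0)
# Principal components of the combined image+steering set (via SVD).
_, _, Vt = np.linalg.svd(X - mean, full_matrices=False)
k = 10
E = Vt[:k]  # top-k eigenvectors, one per row

def predict_steering(new_image):
    """Project a new image onto the eigenspace using only its image
    coordinates, then read the steering coordinate back out."""
    E_img, E_str = E[:, :img_dim], E[:, img_dim:]
    # Least-squares projection coefficients from the image part alone.
    coeffs, *_ = np.linalg.lstsq(E_img.T, new_image - mean[:img_dim], rcond=None)
    return (mean[img_dim:] + coeffs @ E_str).item()

cmd = predict_steering(rng.random(img_dim))
```

With real driving data the top eigenvectors couple road appearance to the trainer's steering, so the recovered steering coordinate becomes the steering command.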
Kinematic calibration of a Stewart platform using pose measurements obtained by a single theodolite
H. Zhuang, O. Masory, Jiahua Yan
Pub Date: 1995-08-05. DOI: 10.1109/IROS.1995.526237

This paper focuses on the accuracy enhancement of Stewart platforms through kinematic calibration. The calibration problem is formulated in terms of a measurement residual, which is the discrepancy between the measured leg length and the computed leg length. With this formulation, one is able to identify kinematic error parameters of the Stewart platform without solving the forward kinematic problem, thus avoiding the numerical problems associated with its solution. The error parameters are essentially the installation errors of the platform ball and U-joints as well as the leg length offsets. From this formulation, a concise differential error model with a well-structured identification Jacobian, which relates the pose measurement residual to the errors in the parameters of the platform, is derived. A measurement procedure that utilizes a single theodolite was devised to determine the poses of the platform. Experimental studies reveal that the proposed calibration method is effective in enhancing the accuracy performance of Stewart platforms.
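The residual in this formulation depends only on the platform's inverse kinematics (pose in, leg lengths out), which is what lets the calibration sidestep the forward kinematic problem. A minimal sketch of that residual, with a hypothetical six-leg geometry and per-leg offsets standing in for the real error parameters:

```python
import numpy as np

def leg_lengths(R, p, base_pts, plat_pts):
    """Stewart platform inverse kinematics: the i-th leg length is
    ||p + R @ a_i - b_i||, where a_i is the i-th platform joint location
    (platform frame) and b_i is the i-th base joint location (base frame)."""
    return np.linalg.norm(p + plat_pts @ R.T - base_pts, axis=1)

def calibration_residual(measured, R, p, base_pts, plat_pts, offsets):
    """Measured leg length minus the length computed from the pose and the
    current estimates of the joint locations and leg length offsets."""
    return measured - (leg_lengths(R, p, base_pts, plat_pts) + offsets)

# Toy geometry: six legs, joints on circles of radius 1.0 (base) and 0.5 (platform).
ang = np.linspace(0.0, 2.0 * np.pi, 6, endpoint=False)
base_pts = np.stack([np.cos(ang), np.sin(ang), np.zeros(6)], axis=1)
plat_pts = 0.5 * np.stack([np.cos(ang), np.sin(ang), np.zeros(6)], axis=1)

R, p = np.eye(3), np.array([0.0, 0.0, 1.0])   # pose from external measurement
measured = leg_lengths(R, p, base_pts, plat_pts)  # perfect measurements
res = calibration_residual(measured, R, p, base_pts, plat_pts, np.zeros(6))
```

With exact parameters the residual vanishes; in calibration, a nonzero residual drives the identification of joint-location errors and offsets through the Jacobian mentioned in the abstract.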
Moving multiple tethered robots between arbitrary configurations
Susan Hert, V. Lumelsky
Pub Date: 1995-08-05. DOI: 10.1109/IROS.1995.526173

We consider the problem of motion planning for a number of small, disc-like robots in a common planar workspace. Each robot is tethered to a point on the boundary of the workspace by a flexible cable of finite length. These cables may be pushed and bent by robots that come in contact with them but remain taut at all times. The robots are given a set of target points to which they must move. Upon arrival at these points, a new set of target points is given. Associated with each set of target points is a configuration of the cables that must be achieved when all robots are at these target points. The motion planning task addressed here is to produce relatively short paths for the robots from an initial (nontrivial) configuration of the cables to a configuration corresponding to the next set of target points. An O(n^2 log n) algorithm is presented for achieving this task for n robots.
Multi-sensor based planning and control for robotic manufacturing systems
Zhenyu Yu, B. Ghosh, N. Xi, T. Tarn
Pub Date: 1995-08-05. DOI: 10.1109/IROS.1995.525888

A multi-sensor based planning and control scheme for robotic manufacturing is presented in this paper. The proposed approach fuses sensory information from various sensors at different temporal and spatial scales in an event-based planning and control scheme. By combining encoder measurements, the relative spatial information obtained from processing visual measurements is mapped to the absolute task space of the robot, and the delayed data from a displacement-based vision algorithm, which represent absolute part-position measurements, are brought up to date. A four-step approach to planning and control of a robotic manipulator is discussed, and an event-driven tracking and control scheme based on multi-sensor information is given. The approach is illustrated by considering a manufacturing workcell in which the manipulator is commanded to pick up a part on a disc conveyor under the guidance of computer vision.
The effect of action recognition and robot awareness in cooperative robotic teams
L. Parker
Pub Date: 1995-08-05. DOI: 10.1109/IROS.1995.525799

Previous research in cooperative robotics has investigated several possible ways of coordinating the actions of cooperative teams, from implicit cooperation through sensory feedback to explicit cooperation using the exchange of communicated messages. These approaches differ in the extent to which robot team members are aware of, or recognize, the actions of their teammates, and the extent to which they use this information to affect their own actions. The research described in this paper investigates this issue of robot awareness of team member actions and its effect on cooperative team performance by examining the results of a series of experiments on teams of mobile robots performing a puck-moving mission. In these experiments, the author varies the team size (and thus the level of redundancy in team member capabilities) and the level of awareness robots have of their teammates' current actions, and evaluates the team's performance using two metrics: time and energy. The results indicate that the impact of action awareness on cooperative team performance is a function not only of team size and the metric of evaluation, but also of the degree to which the effects of actions can be sensed through the world, the relative amount of work available per robot, and the cost of replicated actions. Based on these empirical studies, the author discusses the impact of action recognition and robot awareness on cooperative team design.
The hot line work robot system "Phase II" and its human-robot interface "MOS"
M. Nakashima, K. Yano, Y. Maruyama, H. Yakabe
Pub Date: 1995-08-05. DOI: 10.1109/IROS.1995.526148

This paper describes the design concepts and development outline of the semi-automatic hot-line work robot system "Phase II" and its human-robot interface "MOS", developed by the authors' company. In order to achieve a high level of automation in hot-line work, reduce the operators' workload, and increase work efficiency, the authors have adopted a semi-automatic operation method with dual-armed manipulators and a multi-operation system, or "MOS", in "Phase II". The former is realized through two kinds of controlled motion: sensor model-based controlled motion and master-slave controlled motion. The latter is a system that integrates real images, characters, diagrams, and voice. This paper includes experimental results that certify the effectiveness of the robot system, which uses sensor model-based control and master-slave control jointly, together with "MOS". "Phase II" comprises the following components: vehicles, booms, robot portions (using the 7-axis dual-armed manipulators), cameras, automatic tool changers (ATC), automatic material changers (AMC), and "MOS". The authors are convinced that a system organization such as this is one of the basic systems for overhead work systems with mobility.
Edge tracking using tactile servo
N. Chen, Hong Zhang, R. Rink
Pub Date: 1995-08-05. DOI: 10.1109/IROS.1995.526143

A novel method for performing edge tracking using tactile sensors is presented in this paper. Under the tactile servo scheme, a robot manipulator is driven only by real-time tactile feedback from array tactile sensors mounted directly on the robot end-effector. Compared with previous approaches, the control scheme presented in this paper is consistent and more efficient. Real-time edge tracking experiments are conducted using an experimental system consisting of a PUMA 260, a single rigid finger, and a planar array tactile sensor. Experimental results show satisfactory control speed and accuracy for both straight and curved edge tracking. An example of active tactile sensing of an unknown object using edge tracking is also demonstrated.
Experimental validation of compliance models for LADD transmission kinematics
G. Mennitto, M. Buehler
Pub Date: 1995-08-05. DOI: 10.1109/IROS.1995.525825

Introduces new compliance models for LADD (linear to angular displacement device) transmissions which reduce, by an order of magnitude, inelastic-model errors of up to 18% of full scale over the force and position operating ranges. Elastic models introduced so far have all been based on fiber elasticity and predict an increase in LADD length over the inelastic length with force. The authors show that in experiments the opposite is true: the LADD is always shorter than predicted by the inelastic model, and as the load force increases, the LADD length approaches the inelastic length. The authors found the cause of this fundamentally different elastic behavior to be fiber bending. They also employ one of the new models to improve the prediction of the kinematics of a CLADD, which consists of two concentric LADD devices. The new LADD models are essential for the design of LADD-based systems, the online estimation of LADD forces, and accurate control.
Cooperative material handling by human and robotic agents: module development and system synthesis
J. Adams, R. Bajcsy, J. Kosecka, Vijay R. Kumar, R. Mandelbaum, M. Mintz, R. Paul, Curtis Wang, Y. Yamamoto, X. Yun
Pub Date: 1995-08-05. DOI: 10.1109/IROS.1995.525797

Presents a collaborative effort to design and implement a cooperative material handling system operated by a small team of human and robotic agents in an unstructured indoor environment. The authors' approach makes fundamental use of the human agents' expertise for aspects of task planning, task monitoring, and error recovery. The system is neither fully autonomous nor fully teleoperated; it is designed to make effective use of the human's abilities within the present state of the art of autonomous systems. The robotic agents are systems that are each equipped with at least one sensing modality and possess some capability for self-orientation and/or mobility; they are not required to be homogeneous with respect to either capabilities or function. The research stresses both paradigms and testbed experimentation. Theoretical issues include the coordination principles and techniques that are fundamental to a cooperative multiagent system's basic functioning. The authors have constructed an experimental distributed multiagent-architecture testbed facility; its modular components are currently operational and have been tested individually. Current research focuses on integrating the agents in a scenario for cooperative material handling.
Cooperation between the human operator and the multi-agent robotic system: evaluation of agent monitoring methods for the human interface system
Tsuyoshi Suzuki, K. Yokota, H. Asama, H. Kaetsu, I. Endo
Pub Date: 1995-08-05. DOI: 10.1109/IROS.1995.525798

This paper first discusses the relation between the human operator and a decentralized autonomous robotic system, in which the operator is only loosely coupled to the system. The authors position the operator as a problem solver and a monitor of the system; the human operator is regarded as an agent in the decentralized autonomous robotic system. Strategies for communication between the human operator and the agents are then discussed. The authors propose explicit and implicit communication strategies for monitoring the system, and several monitoring methods to implement them: time-based and event-based monitoring for explicit communication, and eavesdropping on messages for implicit communication. Using simulation, the authors compare the monitoring methods to ascertain how much information the human operator can gather with each. Finally, the characteristics of each monitoring method are analyzed.