Moving furniture with teams of autonomous robots
D. Rus, B. Donald, J. Jennings
Proceedings 1995 IEEE/RSJ International Conference on Intelligent Robots and Systems, Human Robot Interaction and Cooperative Robots. Pub Date: 1995-08-05. DOI: 10.1109/IROS.1995.525802
Abstract: The authors wish to organize furniture in a room with a team of robots that can push objects. They show how coordinated pushing by robots can change the pose (position and orientation) of objects, and then ask whether planning, global control, and explicit communication are necessary for cooperatively changing the pose of objects. They answer in the negative and present, as witnesses, four cooperative manipulation protocols that use different amounts of state, sensing, and communication. The authors analyze these protocols in the information-invariant framework, formalize the notion of resource tradeoffs for robot protocols, and give the tradeoffs for the specific protocols discussed here.
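A planar statics sketch makes the opening claim concrete: pushes applied at different contact points sum to a net force and a net torque, so two coordinated pushers can translate or rotate an object depending on where and how they push. The contact points and forces below are illustrative, not taken from the paper's protocols.

```python
# Net planar wrench from a set of pushing contacts (illustrative values).
def net_wrench(pushes):
    """pushes: list of ((px, py), (fx, fy)) contact point and applied force.
    Returns (Fx, Fy, Tz): net force and z-torque about the object origin."""
    fx = sum(f[0] for _, f in pushes)
    fy = sum(f[1] for _, f in pushes)
    tz = sum(p[0] * f[1] - p[1] * f[0] for p, f in pushes)  # cross product z
    return fx, fy, tz

# Two robots pushing the same way at opposite ends: pure translation.
translation = net_wrench([((1, 0), (0, 1)), ((-1, 0), (0, 1))])
# Pushing in opposite directions at opposite ends: pure rotation.
rotation = net_wrench([((1, 0), (0, 1)), ((-1, 0), (0, -1))])
```

With this decomposition, a pair of pushers can reach any planar pose change by alternating translating and rotating pushes.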
Experiments in sensing and communication for robot convoy navigation
G. Dudek, M. Jenkin, E. Milios, D. Wilkes
Proceedings 1995 IEEE/RSJ International Conference on Intelligent Robots and Systems, Human Robot Interaction and Cooperative Robots. Pub Date: 1995-08-05. DOI: 10.1109/IROS.1995.526171
Abstract: This paper deals with coordinating behaviour in a multi-autonomous robot system. When two or more autonomous robots must interact in order to accomplish some common goal, communication between the robots is essential. Different inter-robot communication strategies give rise to different overall system performance and reliability. After a brief consideration of some theoretical approaches to multiple robot collections, we present concrete implementations of different strategies for convoy-like behaviour. The convoy system is based around two RWI B12 mobile robots and uses only passive visual sensing for inter-robot communication. The issues related to different communication strategies are considered.
Using sensor fusion and contextual information to perform event detection during a phase-based manipulation task
M. R. Tremblay, M. Cutkosky
Proceedings 1995 IEEE/RSJ International Conference on Intelligent Robots and Systems, Human Robot Interaction and Cooperative Robots. Pub Date: 1995-08-05. DOI: 10.1109/IROS.1995.525893
Abstract: We present an approach to event detection during a dexterous manipulation task. The approach utilizes a combination of tactile sensors as well as contextual information. The manipulation task is decomposed into distinct phases, each of which is associated with a limited number of feasible events such as making or breaking contact, slipping, etc. A set of context-based and sensor-based features is associated with each possible event for each type of manipulation phase. The goal is to detect events as reliably and as rapidly as possible. At any time during a task, each possible event is assigned a confidence value between 0 and 1. This indicates how confident the detection scheme is that a given event could be occurring at that instant. A high-level controller can then make use of this information to determine when to switch to a different manipulation phase.
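The phase/event/confidence structure described in the abstract can be sketched as follows. The phase names, feature names, weights, and the weighted-sum combination rule are all hypothetical; the paper does not specify a particular fusion rule.

```python
# Minimal sketch of phase-based event detection with per-event confidences.
# Feasible events per manipulation phase, each scored from a weighted mix of
# sensor-based and context-based features normalized to [0, 1].
PHASE_EVENTS = {
    "approach": {"make_contact": {"force_rise": 0.7, "near_object": 0.3}},
    "grasp":    {"slip": {"tangential_ratio": 0.6, "vibration": 0.4},
                 "break_contact": {"force_drop": 0.8, "retract_cmd": 0.2}},
}

def event_confidences(phase, features):
    """Return a confidence in [0, 1] for each event feasible in this phase."""
    out = {}
    for event, weights in PHASE_EVENTS[phase].items():
        score = sum(w * features.get(name, 0.0) for name, w in weights.items())
        out[event] = min(1.0, max(0.0, score))
    return out

def next_phase(phase, features, threshold=0.8):
    """High-level controller: switch phase once an event is confident enough."""
    conf = event_confidences(phase, features)
    best = max(conf, key=conf.get)
    if conf[best] >= threshold:
        return {"make_contact": "grasp", "slip": "grasp",
                "break_contact": "approach"}[best]
    return phase
```

Restricting scoring to the events feasible in the current phase is what lets the detector be both fast and reliable: it never has to rule out events that cannot occur yet.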
Robust jump impact controller for manipulators
D. Chiu, Sukhan Lee
Proceedings 1995 IEEE/RSJ International Conference on Intelligent Robots and Systems, Human Robot Interaction and Cooperative Robots. Pub Date: 1995-08-05. DOI: 10.1109/IROS.1995.525811
Abstract: This paper presents a novel impact controller which is robust to the uncertainties in environment dynamics and location of the collision surface. By modeling the impact dynamics as a state-dependent jump linear system and applying a modified version of the stochastic maximum principle for such systems, the controller thus obtained optimizes, in the mean-square sense, the approach velocity, the force transient during impact, and the steady-state force error after contact is established. In simulations whose data is obtained from experiments, the controller is compared with impedance controllers; the results indicate that the jump impact controller is not only superior in terms of overshoot and steady-state error but also remarkably robust.
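The "state-dependent jump" idea can be illustrated with a toy 1-DOF simulation: the plant dynamics and the controller gains both switch depending on whether the tool is in contact with the surface. The surface model, gains, and numbers below are illustrative, not the paper's optimal controller.

```python
# Toy state-dependent jump system: a unit mass approaches a spring-damper
# surface at x = 0 and then regulates contact force toward f_des. The mode
# (free space vs. contact) is a function of the state, and each mode has its
# own linear dynamics and gains. All constants are made up for illustration.
def simulate(x0=0.10, v0=-0.5, f_des=5.0, k_env=2000.0, b_env=50.0,
             m=1.0, dt=1e-3, steps=4000):
    """Explicit-Euler integration; returns final (x, v, contact force)."""
    x, v = x0, v0
    force = 0.0
    for _ in range(steps):
        if x <= 0.0:  # contact mode
            force = -k_env * x - b_env * v      # Kelvin-Voigt surface model
            # contact gains: regulate measured force toward f_des, add damping
            u = -f_des + 0.5 * (f_des - force) - 5.0 * v
        else:         # free-space mode
            force = 0.0
            # free-space gains: approach the surface at about 0.2 m/s
            u = -2.0 * x - 4.0 * (v + 0.2)
        v += (u + force) / m * dt
        x += v * dt
    return x, v, force
```

Even this crude mode-switching controller settles to the desired contact force; the paper's contribution is choosing the mode-dependent gains optimally under uncertainty rather than by hand, as done here.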
On the interaction of flexible modes and on-off thrusters in space robotic systems
E. Martin, E. Papadopoulos, J. Angeles
Proceedings 1995 IEEE/RSJ International Conference on Intelligent Robots and Systems, Human Robot Interaction and Cooperative Robots. Pub Date: 1995-08-05. DOI: 10.1109/IROS.1995.526140
Abstract: Space manipulators mounted on an on-off thruster-controlled base are envisioned to assist in the assembly and maintenance of space structures. When handling large payloads, manipulator joint and link flexibility become important, since they can result in dynamic interactions with the payload-attitude controller that waste thruster fuel. In this paper, the dynamic behavior of a flexible-joint manipulator on a free-flying base is approximated by a single-mode mechanical system, while its parameters are matched with available space-manipulator data. Describing functions are used to predict the dynamic performance of three alternative controller/estimator schemes, and to conduct a parametric study on the influence of key system parameters. Design guidelines and a particular state estimator are suggested that can minimize such undesirable dynamic interactions as well as thruster fuel consumption.
Utilizing human vision and computer vision to direct a robot in a semi-structured environment via task-level commands
E. Miles, R. Cannon
Proceedings 1995 IEEE/RSJ International Conference on Intelligent Robots and Systems, Human Robot Interaction and Cooperative Robots. Pub Date: 1995-08-05. DOI: 10.1109/IROS.1995.525822
Abstract: A novel approach to directing a highly autonomous robot operating in a semi-structured environment is presented. In this approach, the human operator assists the robot in perceiving unexpected situations in the environment through simple point-and-click interaction with a live video display from cameras on board the robot. As a result of this high-level guidance, the robot is able to invoke a variety of computer vision algorithms to augment the world model accordingly. This approach utilizes the complementary vision capabilities of both the human and the computer to extend the capability of the human/robot team to overcome the challenges of semi-structured environments without sacrificing the high degree of autonomy and resilience to time delay of the task-level command architecture. Preliminary experimental results with a laboratory robot are presented.
Intelligent control using a neuro-fuzzy network
M. Iskarous, K. Kawamura
Proceedings 1995 IEEE/RSJ International Conference on Intelligent Robots and Systems, Human Robot Interaction and Cooperative Robots. Pub Date: 1995-08-05. DOI: 10.1109/IROS.1995.525908
Abstract: Intelligent control techniques have emerged to overcome some deficiencies in conventional control methods in dealing with complex real-world systems. These problems include knowledge adaptation, learning, and expert knowledge incorporation. In this paper, a hybrid network that combines fuzzy inferencing and neural networks is used to model and to control complex dynamic systems. The network takes advantage of the learning algorithms developed for neural networks to generate the knowledge base used in fuzzy inferencing. The network was used to model and to control a robot arm with flexible pneumatic actuators. Comparison with a nonlinear control technique used for the robot joints is also presented.
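The core hybrid idea, fuzzy rule evaluation trained with neural-network-style gradient descent, can be sketched in a few lines. This is a generic zeroth-order Sugeno-style network, not the paper's specific architecture; the membership widths and learning rate are illustrative.

```python
import math

class NeuroFuzzy:
    """Tiny neuro-fuzzy sketch: Gaussian memberships feed a normalized rule
    layer; rule consequents are trained by gradient descent on squared error."""

    def __init__(self, centers, width=0.5):
        self.centers = list(centers)          # one fuzzy set (rule) per center
        self.width = width
        self.weights = [0.0] * len(centers)   # consequent of each rule

    def _firing(self, x):
        """Normalized rule firing strengths for input x."""
        mu = [math.exp(-((x - c) / self.width) ** 2) for c in self.centers]
        s = sum(mu)
        return [m / s for m in mu]

    def predict(self, x):
        return sum(w * f for w, f in zip(self.weights, self._firing(x)))

    def train(self, data, lr=0.5, epochs=200):
        """Per-sample gradient descent on the rule consequents."""
        for _ in range(epochs):
            for x, y in data:
                f = self._firing(x)
                err = self.predict(x) - y
                for i in range(len(self.weights)):
                    self.weights[i] -= lr * err * f[i]
```

After training, the learned consequent weights play the role of the fuzzy knowledge base: each is the output the system associates with one fuzzy region of the input.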
Tactile gestures for human/robot interaction
R. Voyles, P. Khosla
Proceedings 1995 IEEE/RSJ International Conference on Intelligent Robots and Systems, Human Robot Interaction and Cooperative Robots. Pub Date: 1995-08-05. DOI: 10.1109/IROS.1995.525854
Abstract: Gesture-based programming is a new paradigm to ease the burden of programming robots. By tapping into the user's wealth of experience with contact transitions, compliance, uncertainty, and operations sequencing, we hope to provide a more intuitive programming environment for complex, real-world tasks based on the expressiveness of nonverbal communication. A requirement for this to be accomplished is the ability to interpret gestures to infer the intentions behind them. As a first step toward this goal, this paper presents an application of distributed perception for inferring a user's intentions by observing tactile gestures. These gestures consist of sparse, inexact, physical "nudges" applied to the robot's end effector for the purpose of modifying its trajectory in free space. A set of independent agents, each with its own local, fuzzified, heuristic model of a particular trajectory parameter, observes data from a wrist force/torque sensor to evaluate the gestures. The agents then independently determine the confidence of their respective findings, and distributed arbitration resolves the interpretation through voting.
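The agent-per-parameter architecture can be sketched as below. The gesture vocabulary, the ramp memberships, and the thresholds are hypothetical; the paper's agents use their own local fuzzified heuristic models.

```python
def fuzzy_above(value, lo, hi):
    """Ramp membership: 0 below lo, 1 above hi, linear in between."""
    if value <= lo:
        return 0.0
    if value >= hi:
        return 1.0
    return (value - lo) / (hi - lo)

# Each agent watches one trajectory parameter in the wrist force/torque data
# (fx: force along travel, tz: torque about the vertical axis).
AGENTS = {
    "speed_up":   lambda fx, tz: fuzzy_above(fx, 1.0, 4.0),
    "slow_down":  lambda fx, tz: fuzzy_above(-fx, 1.0, 4.0),
    "turn_left":  lambda fx, tz: fuzzy_above(tz, 0.2, 1.0),
    "turn_right": lambda fx, tz: fuzzy_above(-tz, 0.2, 1.0),
}

def interpret(fx, tz, min_conf=0.5):
    """Distributed arbitration by voting: the most confident agent wins,
    provided its confidence clears a threshold; otherwise no gesture."""
    votes = {name: agent(fx, tz) for name, agent in AGENTS.items()}
    best = max(votes, key=votes.get)
    return best if votes[best] >= min_conf else None
```

Because each agent only models its own parameter, sparse and inexact nudges degrade gracefully: an ambiguous nudge simply fails the confidence threshold instead of being misread.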
Object classification from analysis of impact acoustics
R. Durst, E. Krotkov
Proceedings 1995 IEEE/RSJ International Conference on Intelligent Robots and Systems, Human Robot Interaction and Cooperative Robots. Pub Date: 1995-08-05. DOI: 10.1109/IROS.1995.525780
Abstract: We address the problem of autonomously classifying objects from the sounds they make when struck, and present results from different attempts to classify various items. We extract the two most significant spikes in the frequency domain as features, and show that accurate object classification based on these features is possible. Two techniques are discussed: a minimum-distance classifier and a hybrid minimum-distance/decision-tree classifier. Results from classifier trials show that object classification using the hybrid classifier can be done as accurately as using the minimum-distance classifier, but at lower computational expense.
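The two-peak feature extraction and the minimum-distance classifier can be sketched as follows; the spectra and class prototypes are made up, and a real pipeline would compute the spectrum with an FFT of the recorded impact sound.

```python
def two_peak_features(spectrum):
    """Bin indices of the two largest spectral magnitudes, in ascending order."""
    order = sorted(range(len(spectrum)), key=lambda i: spectrum[i], reverse=True)
    return tuple(sorted(order[:2]))

def classify(features, prototypes):
    """Minimum-distance classification: pick the class whose prototype
    feature vector is nearest in squared Euclidean distance."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(prototypes, key=lambda label: dist(features, prototypes[label]))

# Illustrative use: a toy 7-bin magnitude spectrum and two class prototypes.
spectrum = [0, 1, 9, 2, 8, 1, 0]
prototypes = {"mug": (2, 4), "can": (10, 20)}
label = classify(two_peak_features(spectrum), prototypes)
```

The hybrid variant in the paper first narrows the candidate classes with a decision tree and only then applies the distance test, which is where the computational savings come from.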
Maximum-likelihood depth-from-defocus for active vision
W. Klarquist, W. Geisler, A. Bovik
Proceedings 1995 IEEE/RSJ International Conference on Intelligent Robots and Systems, Human Robot Interaction and Cooperative Robots. Pub Date: 1995-08-05. DOI: 10.1109/IROS.1995.525912
Abstract: A new method for actively recovering depth information using image defocus is demonstrated and shown to support active stereo vision depth recovery by providing monocular depth estimates to guide the positioning of cameras for stereo processing. This active depth-from-defocus approach employs a spatial frequency model for image defocus which incorporates the optical transfer function of the image acquisition system and a maximum likelihood estimator to determine the amount of defocus present in a sequence of two or more images taken from the same pose. This defocus estimate is translated into a measurement of depth and associated uncertainty that is used to control the positioning of a variable baseline stereo camera system. This cooperative arrangement significantly reduces the matching uncertainty of the stereo correspondence process and increases the depth resolution obtainable with an active stereo vision platform.
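The "defocus estimate translated into depth" step rests on thin-lens geometry, which can be sketched directly. This illustrates only the geometric blur-to-depth relation, not the paper's maximum-likelihood defocus estimator; symbols are f (focal length), D (aperture diameter), and s (lens-to-sensor distance), all in meters.

```python
def blur_diameter(depth, f, D, s):
    """Thin-lens blur-circle diameter for a point at the given depth."""
    return D * s * abs(1.0 / f - 1.0 / depth - 1.0 / s)

def depth_from_blur(c, f, D, s, near_side=True):
    """Invert the blur model. A single blur value maps to two candidate
    depths, one on each side of the focused plane; near_side selects the
    candidate closer than the focused distance."""
    sign = 1.0 if near_side else -1.0
    inv_depth = 1.0 / f - 1.0 / s + sign * c / (D * s)
    return 1.0 / inv_depth

# Illustrative setup: 50 mm lens, 20 mm aperture, sensor placed to focus at 2 m.
f, D = 0.05, 0.02
s = 1.0 / (1.0 / f - 1.0 / 2.0)
c = blur_diameter(1.0, f, D, s)          # blur for a point 1 m away
recovered = depth_from_blur(c, f, D, s)  # should recover 1 m
```

The two-sided ambiguity is one reason the paper uses two or more images: comparing how blur changes across images disambiguates which side of the focal plane the point lies on.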