Application-oriented joint rate minimization by cell and tool optimization for an R6-manipulator
Pub Date: 2013-10-01 | DOI: 10.1109/ROSE.2013.6698445
Christian Reimund, A. Koenig, K. Kleinmann, A. Weigl-Seitz
A robot's joint rates increase when it passes near-singular kinematic configurations, which raises the required axis currents and the wear on the bearings. Exceeding a manipulator's axis acceleration limits aborts its motion and therefore makes an off-line-planned trajectory undrivable in a real setting; the same applies to hitting the axes' angular limits. Given a desired trajectory and aiming to reduce joint rates, we discuss an application-oriented off-line approach for multi-dimensional optimization of the tool definition using a heuristic method for an industrial R6 manipulator, a KUKA KR 60/2. We analyze the different kinds of singularities this robot's kinematics involves. As optimization criteria, we present a way to identify and quantify their impact on manipulability and to prioritize optimization with respect to the arm and the wrist. We also show that decoupling may be applied to choose an axis configuration that avoids hitting the angular limits. The presented method is adapted to a plasma-cutting application and avoids the drawbacks of common approaches, such as pose or speed deviation or the need for an auxiliary axis.
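To make the manipulability criterion concrete, the following sketch computes Yoshikawa's measure w = sqrt(det(J J^T)) along a trajectory and runs a coarse heuristic search over a tool-length parameter. The planar 3R stand-in kinematics, the trajectory, and the grid search are illustrative assumptions, not the paper's KUKA KR 60/2 model or its optimization method.

```python
# Hedged sketch: Yoshikawa's manipulability as an optimization criterion,
# illustrated on a planar 3R arm standing in for the R6 kinematics.
import numpy as np

def jacobian_planar_3r(q, links):
    """2D position Jacobian of a planar 3R arm."""
    s = np.cumsum(q)                      # absolute link angles
    J = np.zeros((2, 3))
    for i in range(3):
        # column i: derivative of the end-point w.r.t. joint i
        lengths = list(links)[i:]
        J[0, i] = -sum(l * np.sin(a) for l, a in zip(lengths, s[i:]))
        J[1, i] =  sum(l * np.cos(a) for l, a in zip(lengths, s[i:]))
    return J

def manipulability(J):
    """Yoshikawa's measure w = sqrt(det(J J^T)); small w means near-singular."""
    return np.sqrt(max(np.linalg.det(J @ J.T), 0.0))

def worst_case_w(d_tool, trajectory, base_links=(0.8, 0.6, 0.2)):
    """Minimum manipulability along a joint-space trajectory for a given tool length."""
    links = (base_links[0], base_links[1], base_links[2] + d_tool)
    return min(manipulability(jacobian_planar_3r(q, links)) for q in trajectory)

# Hypothetical joint-space trajectory and a coarse heuristic (grid) search
# over the tool length, mimicking the idea of optimizing the tool definition.
trajectory = [np.array([0.3, 0.4, 0.2]) * t for t in np.linspace(0.2, 1.0, 50)]
candidates = np.linspace(0.0, 0.5, 26)
best = max(candidates, key=lambda d: worst_case_w(d, trajectory))
print(f"best tool offset: {best:.2f} m, "
      f"worst-case manipulability: {worst_case_w(best, trajectory):.4f}")
```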
{"title":"Application-oriented joint rate minimization by cell and tool optimization for an R6-manipulator","authors":"Christian Reimund, A. Koenig, K. Kleinmann, A. Weigl-Seitz","doi":"10.1109/ROSE.2013.6698445","DOIUrl":"https://doi.org/10.1109/ROSE.2013.6698445","url":null,"abstract":"A robot's joint rates increase when passing near-singular kinematic setups, increasing the required axes currents and wearout of the bearings. Exceeding a manipulator's axes acceleratory limits will abort its motion and therefore make an off-line-planned trajectory undriveable in a real setting; Latter also applies to hitting the axes' angular constraints. Given a desired trajectory and aiming for reducing joint rates, we discuss an application-oriented offline approach for multi-dimensional optimization of the tool definition using a heuristic method for an industrial R6 manipulator, a KUKA KR 60/2. We analyze the different kinds of singularities this robot's kinematic involves. As optimization criteria, we present a way to identify and quantify their impact on manipulability and prioritize optimization with respect to the arm and the wrist. We also show that decoupling may be applied for choosing an axes configuration that avoids hitting their angular constraints. The presented method is adapted to a plasma-cutting application. It avoids common approaches' draw-backs like pose or speed deviation or the need for an auxiliary axis.","PeriodicalId":187001,"journal":{"name":"2013 IEEE International Symposium on Robotic and Sensors Environments (ROSE)","volume":"18 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"117053114","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Bayesian filtering to improve the dynamic accuracy of electromagnetic tracking
Pub Date: 2013-10-01 | DOI: 10.1109/ROSE.2013.6698424
H. Sen, P. Kazanzides
Tracking systems are essential components for many computer assisted interventions because they enable the doctor to visualize anatomical information, derived from preoperative or intraoperative images, registered with respect to the actual patient anatomy. This paper presents two applications of Bayesian filters, the Particle Filter (PF) and the Extended Kalman Filter (EKF), to obtain accurate dynamic tracking performance from an electromagnetic tracking (EMT) system, even when the EMT cannot provide the full measurement state at each sampling interval (for example, when transmit coils are driven sequentially and/or receive coils are not sampled simultaneously). Experiments are performed with a custom EMT system, consisting of a transmitter coil array and one or more receiving coils, to demonstrate that the proposed method provides good dynamic tracking accuracy at different velocities.
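As a rough illustration of fusing partial measurements, the sketch below runs a Kalman-style filter with a constant-velocity model and updates it with one position axis per sample, mimicking sequentially driven coils. The motion model, noise levels, and simulated data are assumptions, not the paper's EMT sensor model or its particle filter variant.

```python
# Hedged sketch: a Kalman-style update that fuses whatever subset of the
# measurement vector is available at each sample.
import numpy as np

dt = 0.01
F = np.block([[np.eye(3), dt * np.eye(3)],
              [np.zeros((3, 3)), np.eye(3)]])        # constant-velocity motion model
Q = 1e-4 * np.eye(6)                                 # process noise (assumed)
R_axis = 1e-3                                        # per-axis measurement noise (assumed)

x = np.zeros(6)                                      # [position, velocity]
P = np.eye(6)

def predict(x, P):
    return F @ x, F @ P @ F.T + Q

def update_partial(x, P, z_axis, axis):
    """Update with a single measured position axis (0=x, 1=y, 2=z)."""
    H = np.zeros((1, 6)); H[0, axis] = 1.0
    S = H @ P @ H.T + R_axis
    K = P @ H.T / S
    x = x + (K * (z_axis - H @ x)).ravel()
    P = (np.eye(6) - K @ H) @ P
    return x, P

# Simulated target moving along x; only one axis is observed per sample,
# cycling x -> y -> z, as if transmit coils were excited sequentially.
rng = np.random.default_rng(0)
for k in range(300):
    truth = np.array([0.5 * k * dt, 0.1, 0.2])
    x, P = predict(x, P)
    axis = k % 3
    z = truth[axis] + rng.normal(0.0, np.sqrt(R_axis))
    x, P = update_partial(x, P, z, axis)
print("estimated position:", np.round(x[:3], 3))
```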
{"title":"Bayesian filtering to improve the dynamic accuracy of electromagnetic tracking","authors":"H. Sen, P. Kazanzides","doi":"10.1109/ROSE.2013.6698424","DOIUrl":"https://doi.org/10.1109/ROSE.2013.6698424","url":null,"abstract":"Tracking systems are essential components for many computer assisted interventions because they enable the doctor to visualize anatomical information, derived from preoperative or intraoperative images, registered with respect to the actual patient anatomy. This paper presents two applications of Bayesian filters: Particle Filter (PF) and Extended Kalman Filter (EKF) to obtain accurate dynamic tracking performance from an electromagnetic tracking (EMT) system, even if the EMT cannot provide the full measurement state at each sampling interval (for example, when transmit coils are driven sequentially and/or receive coils are not sampled simultaneously). Experiments are performed with a custom EMT system, consisting of a transmitter coil array and one or more receiving coils, to demonstrate that the proposed method provides good dynamic tracking accuracy at different velocities.","PeriodicalId":187001,"journal":{"name":"2013 IEEE International Symposium on Robotic and Sensors Environments (ROSE)","volume":"149 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127264711","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Proprioceptive sensing for autonomous self-righting on unknown sloped planar surfaces
Pub Date: 2013-10-01 | DOI: 10.1109/ROSE.2013.6698436
J. Collins, Chad C. Kessens, Stephen Biggs
Robots that operate in dynamic, unknown environments occasionally require error recovery methods to return to a preferred orientation for mobility (i.e. self-righting), thus preventing mission failure and enabling asset recovery. In this paper, we reduce to practice our previously developed framework for determining self-righting solutions for generic robots on sloped planar surfaces. We begin by briefly reviewing our framework. We then describe the development of a modular robot for examining the effectiveness of our framework. This robot utilizes only joint encoders and an inertial measurement unit (IMU) for sensing. Next, we test the fidelity of our sensors by comparing commanded values, sensor data, and ground truth as given by a Vicon motion capture sensor environment, yielding a baseline margin of error. We utilize this data to explore the robot's ability to determine unknown ground angles using only proprioceptive sensors in combination with a conformation space map, which is pre-computed using our framework. We then investigate the robot's ability to develop its own conformation space map experimentally, and compare it to the pre-computed map. Finally, we demonstrate the robot's ability to self-right on various ground angles using 1, 2, and 3 degrees of freedom.
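The following toy sketch illustrates the lookup idea behind estimating an unknown ground angle from proprioception: a precomputed table maps joint configuration and candidate slope to a predicted body pitch, and the slope whose prediction best matches the measured IMU pitch is selected. The 1-DOF tipping model and the grids are hypothetical stand-ins for the paper's conformation space map, not its actual computation.

```python
# Hedged sketch of a ground-angle lookup from a precomputed map and an IMU reading.
import numpy as np

def predicted_pitch(joint_angle, ground_angle):
    """Toy model: body pitch = slope plus a lever effect of a single joint."""
    return ground_angle + 0.3 * np.sin(joint_angle)

# Precompute the "map" over a grid of joint angles and candidate slopes.
joint_grid = np.linspace(-np.pi / 2, np.pi / 2, 61)
slope_grid = np.deg2rad(np.linspace(-30, 30, 121))
pitch_map = predicted_pitch(joint_grid[:, None], slope_grid[None, :])

def estimate_slope(joint_angle, measured_pitch):
    """Pick the candidate slope whose predicted pitch best matches the IMU."""
    row = np.argmin(np.abs(joint_grid - joint_angle))
    col = np.argmin(np.abs(pitch_map[row] - measured_pitch))
    return slope_grid[col]

true_slope = np.deg2rad(12.0)
q = 0.4                                                          # joint encoder reading (rad)
imu_pitch = predicted_pitch(q, true_slope) + np.deg2rad(0.5)     # noisy IMU pitch
print("estimated slope (deg):", round(np.rad2deg(estimate_slope(q, imu_pitch)), 1))
```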
{"title":"Proprioceptive sensing for autonomous self-righting on unknown sloped planar surfaces","authors":"J. Collins, Chad C. Kessens, Stephen Biggs","doi":"10.1109/ROSE.2013.6698436","DOIUrl":"https://doi.org/10.1109/ROSE.2013.6698436","url":null,"abstract":"Robots that operate in dynamic, unknown environments occasionally require error recovery methods to return to a preferred orientation for mobility (i.e. self-righting), thus preventing mission failure and enabling asset recovery. In this paper, we reduce to practice our previously developed framework for determining self-righting solutions for generic robots on sloped planar surfaces. We begin by briefly reviewing our framework. We then describe the development of a modular robot for examining the effectiveness of our framework. This robot utilizes only joint encoders and an inertial measurement unit (IMU) for sensing. Next, we test the fidelity of our sensors by comparing commanded values, sensor data, and ground truth as given by a Vicon motion capture sensor environment, yielding a baseline margin of error. We utilize this data to explore the robot's ability to determine unknown ground angles using only proprioceptive sensors in combination with a conformation space map, which is pre-computed using our framework. We then investigate the robot's ability to develop its own conformation space map experimentally, and compare it to the pre-computed map. Finally, we demonstrate the robot's ability to self-right on various ground angles using 1, 2, and 3 degrees of freedom.","PeriodicalId":187001,"journal":{"name":"2013 IEEE International Symposium on Robotic and Sensors Environments (ROSE)","volume":"32 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133265310","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Reliable and efficient RFID-based localization for mobile robot
Pub Date: 2013-10-01 | DOI: 10.1109/ROSE.2013.6698440
Hui Zhang, Joseph C. Chen, Kai Zhang
This paper proposes a precise and low-cost localization system for mobile robots using a Radio Frequency Identification (RFID) module and RFID tags. The localization system consists of the RFID reader, the RFID tags, a mobile robot and an off-board computer. The proposed system is developed to solve the localization and guidance problems of mobile robots in home, service, industrial and defense applications by tracking RFID tags, analyzing the received signal strength indication and taking further metrics into consideration. The critical metric is the velocity of the mobile robot, which has a significant influence on the localization system and is the novel feature of this work. A multiple regression model and fuzzy logic are compared for locating the mobile robot. The experimental results demonstrate that the fuzzy logic system locates mobile robots more accurately than the multiple regression model. This paper illustrates the design and performance analysis of a low-cost, high-accuracy localization system.
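To contrast the two estimators mentioned above, the sketch below fits a multiple regression model and evaluates a small zero-order Sugeno fuzzy system, both mapping RSSI and robot velocity to tag distance. The synthetic calibration data, membership functions, and rule outputs are illustrative assumptions rather than the paper's calibrated system.

```python
# Hedged sketch: multiple regression vs. a small fuzzy-logic estimator for
# distance from RSSI and robot velocity.
import numpy as np

rng = np.random.default_rng(1)

# Synthetic calibration data: RSSI falls off with log-distance and is further
# degraded at higher robot velocities (the abstract's "critical metric").
dist = rng.uniform(0.5, 5.0, 200)
vel = rng.uniform(0.0, 1.0, 200)
rssi = -40.0 - 20.0 * np.log10(dist) - 5.0 * vel + rng.normal(0, 1.0, 200)

# --- Multiple regression: distance ~ b0 + b1*rssi + b2*velocity ---
X = np.column_stack([np.ones_like(rssi), rssi, vel])
beta, *_ = np.linalg.lstsq(X, dist, rcond=None)

def regression_distance(r, v):
    return beta[0] + beta[1] * r + beta[2] * v

# --- Zero-order Sugeno fuzzy system with triangular memberships (assumed shapes) ---
def tri(x, a, b, c):
    return np.maximum(np.minimum((x - a) / (b - a + 1e-9),
                                 (c - x) / (c - b + 1e-9)), 0.0)

def fuzzy_distance(r, v):
    strong, medium, weak = tri(r, -55, -45, -35), tri(r, -65, -55, -45), tri(r, -80, -68, -55)
    slow, fast = tri(v, -0.5, 0.0, 0.6), tri(v, 0.3, 1.0, 1.7)
    rules = [(strong * slow, 0.8), (strong * fast, 1.2), (medium * slow, 2.0),
             (medium * fast, 2.6), (weak * slow, 4.0), (weak * fast, 4.5)]
    w = sum(s for s, _ in rules)
    return sum(s * out for s, out in rules) / (w + 1e-9)

r_test, v_test = -48.0, 0.4
print("regression estimate:", round(float(regression_distance(r_test, v_test)), 2), "m")
print("fuzzy estimate:     ", round(float(fuzzy_distance(r_test, v_test)), 2), "m")
```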
{"title":"Reliable and efficient RFID-based localization for mobile robot","authors":"Hui Zhang, Joseph C. Chen, Kai Zhang","doi":"10.1109/ROSE.2013.6698440","DOIUrl":"https://doi.org/10.1109/ROSE.2013.6698440","url":null,"abstract":"This paper proposes a precise and low cost localization system for mobile robots by using Radio Frequency Identification (RFID) module and RFID tags. The localization system consists of the RFID reader, the RFID tags, a mobile robot and an off-board computer. The proposed system is developed to solve the problems of localization and guiding for mobile robots at home, service, industrial and defense applications by tracking RFID tags, analyzing received signal strength indication and taking other metric into consideration. The critical metric is the velocity of mobile robot, which has significant influence on localization system and is the novel feature for the localization system. Multiple regression model and fuzzy logic are compared to locate the mobile robot. The experiment results demonstrated that the fuzzy logic system can be used to build localization system to locate mobile robots more accurately than multiple regression model. This paper illustrates the design and performance analysis of low cost and high accuracy localization system.","PeriodicalId":187001,"journal":{"name":"2013 IEEE International Symposium on Robotic and Sensors Environments (ROSE)","volume":"30 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123398797","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Real time human motion imitation of anthropomorphic dual arm robot based on Cartesian impedance control
Pub Date: 2013-10-01 | DOI: 10.1109/ROSE.2013.6698413
R. Luo, Bo-Han Shih, Tsung-Wei Lin
This paper presents a real-time human motion imitation approach to control an anthropomorphic dual-arm robot by human demonstration. We use the processed positions of the human skeleton joints from a Kinect sensor directly as commands to control the robot arms, applying Cartesian impedance control to follow the human motion without solving the inverse kinematics problem. To avoid jerky robot arm motion, we apply an on-line trajectory generation algorithm that produces a smooth movement trajectory by imposing velocity and acceleration limits. Moreover, the self-collision problem is also considered: when the distance between two parts of the body becomes small enough, a repulsive force is automatically generated to prevent collision. Taking the robot's capabilities and safety issues into account, the output force is restricted to ensure that the robot's motion remains stable. We demonstrate the feasibility of the approach by implementing the human motion imitation system on a humanoid dual-arm robot developed in our lab. The experimental results show that the system works well in practice and is flexible enough to imitate various human motions.
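A minimal sketch of the three ingredients named above (Cartesian impedance toward the tracked target, an online limiter on commanded velocity and acceleration, and a repulsive term for self-collision avoidance) follows. The gains, limits, and single point-pair collision check are assumptions, not the robot's actual controller.

```python
# Hedged sketch: impedance law + online setpoint limiter + repulsive force.
import numpy as np

K, D = 300.0, 40.0                 # impedance stiffness / damping (assumed)
V_MAX, A_MAX, DT = 0.8, 2.0, 0.01  # velocity/acceleration limits and cycle time
D_SAFE, K_REP = 0.15, 200.0        # self-collision distance threshold and gain

def limit_setpoint(x_prev, v_prev, x_target):
    """Online trajectory generation: move toward the target under v/a limits."""
    v_des = np.clip((x_target - x_prev) / DT, -V_MAX, V_MAX)
    v_new = v_prev + np.clip(v_des - v_prev, -A_MAX * DT, A_MAX * DT)
    return x_prev + v_new * DT, v_new

def impedance_force(x, v, x_des):
    """Cartesian impedance: spring toward the setpoint, damping on the velocity."""
    return K * (x_des - x) - D * v

def repulsive_force(p_hand, p_other):
    """Push the hand away from a point on the other arm when they get close."""
    diff = p_hand - p_other
    dist = np.linalg.norm(diff)
    if dist >= D_SAFE or dist < 1e-6:
        return np.zeros(3)
    return K_REP * (D_SAFE - dist) * diff / dist

# One control cycle with a hypothetical Kinect target and other-arm point.
x_meas, v_meas = np.zeros(3), np.zeros(3)       # measured end-effector state
x_sp, v_sp = np.zeros(3), np.zeros(3)           # smoothed setpoint state
kinect_target = np.array([0.40, 0.10, 0.30])    # hand position from the Kinect skeleton
other_arm_point = np.array([0.05, 0.02, 0.01])  # closest point on the other arm
x_sp, v_sp = limit_setpoint(x_sp, v_sp, kinect_target)
f = impedance_force(x_meas, v_meas, x_sp) + repulsive_force(x_meas, other_arm_point)
f = np.clip(f, -50.0, 50.0)                     # restrict output force, as in the abstract
print("commanded Cartesian force:", np.round(f, 2))
```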
{"title":"Real time human motion imitation of anthropomorphic dual arm robot based on Cartesian impedance control","authors":"R. Luo, Bo-Han Shih, Tsung-Wei Lin","doi":"10.1109/ROSE.2013.6698413","DOIUrl":"https://doi.org/10.1109/ROSE.2013.6698413","url":null,"abstract":"This paper presented a real-time human motion imitation approach to control an anthropomorphic dual arm robot by human demonstration. We use the processed positions of human skeleton joints from Kinect sensor as commands directly to control the robot arms by using Cartesian impedance control to follow the human motion without solving inverse kinematics problem. In order to avoid a jerky robot arm motion, we apply an on-line trajectory generator algorithm to obtain a smooth movement trajectory by imposing the limit of velocity and acceleration. Moreover, the self-collision problem has also been considered. When the distance between two parts of body is close enough, a repulsive force will automatically generate to prevent collision. Taking the robot capability and safe issue into account, the output force is restricted to ensure that the action of robot is stable. We demonstrate the feasibility of the approach by implementing the human motion imitation system on a humanoid dual arm robot developed in our lab. The experimental results show that the system is in good practice and flexible enough to imitate various human motions.","PeriodicalId":187001,"journal":{"name":"2013 IEEE International Symposium on Robotic and Sensors Environments (ROSE)","volume":"13 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126677595","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
High-speed and accurate laser scan matching using classified features
Pub Date: 2013-10-01 | DOI: 10.1109/ROSE.2013.6698419
Lei Shu, Hu Xu, May Huang
Laser scan matching algorithms play a key role in robot localization and mapping. In this paper, we propose a classified feature-based algorithm that matches laser scans in a closed-form manner, called the Classified Feature-based Scan Matcher (CFSM). Based on a geometric observation, our classified features are defined separately as rotational features and translational features to improve matching accuracy. Experimental results demonstrate that CFSM produces better accuracy for scans with large angular displacement without increasing running time. Indoor robots can take advantage of this algorithm to perform fast and accurate pose estimation.
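The sketch below illustrates the two-stage, closed-form idea: rotation is recovered from paired orientation ("rotational") features via a circular mean, and translation from matched point ("translational") features once the rotation is fixed. The matched pairs are assumed given, so the paper's feature extraction, classification, and matching steps are not reproduced.

```python
# Hedged sketch: closed-form rotation from orientation features,
# then closed-form translation from point features.
import numpy as np

def estimate_rotation(angles_ref, angles_cur):
    """Closed-form rotation from paired feature orientations (circular mean)."""
    d = np.asarray(angles_cur) - np.asarray(angles_ref)
    return np.arctan2(np.mean(np.sin(d)), np.mean(np.cos(d)))

def estimate_translation(pts_ref, pts_cur, theta):
    """Closed-form translation given the rotation: t = mean(p_cur - R p_ref)."""
    c, s = np.cos(theta), np.sin(theta)
    R = np.array([[c, -s], [s, c]])
    return np.mean(np.asarray(pts_cur) - np.asarray(pts_ref) @ R.T, axis=0)

# Synthetic matched features under a known motion (35 deg rotation, [0.5, -0.2] shift).
rng = np.random.default_rng(2)
theta_true, t_true = np.deg2rad(35.0), np.array([0.5, -0.2])
R_true = np.array([[np.cos(theta_true), -np.sin(theta_true)],
                   [np.sin(theta_true),  np.cos(theta_true)]])
pts_ref = rng.uniform(-3, 3, (30, 2))
pts_cur = pts_ref @ R_true.T + t_true + rng.normal(0, 0.01, (30, 2))
ang_ref = rng.uniform(-np.pi, np.pi, 20)
ang_cur = ang_ref + theta_true + rng.normal(0, 0.01, 20)

theta = estimate_rotation(ang_ref, ang_cur)
t = estimate_translation(pts_ref, pts_cur, theta)
print("rotation (deg):", round(np.rad2deg(theta), 2), " translation:", np.round(t, 3))
```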
{"title":"High-speed and accurate laser scan matching using classified features","authors":"Lei Shu, Hu Xu, May Huang","doi":"10.1109/ROSE.2013.6698419","DOIUrl":"https://doi.org/10.1109/ROSE.2013.6698419","url":null,"abstract":"Laser scan matching algorithm plays a key role in robot localization and mapping. In this paper, we propose a classified feature-based algorithm that matches laser scans in a closed-form manner called Classified Feature-based Scan Matcher (CFSM). Based on a geometric observation, our classified features are defined as rotational features and translational features separately to improve matching accuracy. Experimental results demonstrate that CFSM can produce better accuracy for scans with large angular displacement, without increasing running time. Indoor robot can take advantage of this algorithm in performing fast and accurate pose estimation.","PeriodicalId":187001,"journal":{"name":"2013 IEEE International Symposium on Robotic and Sensors Environments (ROSE)","volume":"48 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130567541","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Robot ontologies for sensor- and Image-guided surgery
Pub Date: 2013-10-01 | DOI: 10.1109/ROSE.2013.6698412
T. Haidegger, Marcos E. Barreto, P. Gonçalves, M. Habib, S. V. Ragavan, Howard Li, Alberto Vaccarella, Roberta Perrone, Edson Prestes e Silva
Robots and robotics are becoming more complex and flexible owing to technological advancement, improved sensing capabilities and machine intelligence. Service robots target a wide range of applications, relying on advanced human-robot interaction. Medical robotics is becoming a leading application area within this field, and the number of surgical, rehabilitation and hospital assistance robots is rising rapidly. However, the complexity of the medical environment has been a major barrier preventing wider use of robotic technology, so mostly teleoperated, human-in-the-loop control solutions have emerged so far. Providing smarter and better medical robots requires a systematic approach to describing and translating human processes for the robots. It is believed that ontologies can bridge human cognitive understanding and robotic reasoning (machine intelligence). Besides, ontologies serve as a tool and method to assess the added value robotic technology brings into the medical environment. The purpose of this paper is to identify relevant ontology research in medical robotics and to review the state of the art. It focuses on the surgical domain; fundamental terminology and interactions are described for two example applications in neurosurgery and orthopaedics.
{"title":"Robot ontologies for sensor- and Image-guided surgery","authors":"T. Haidegger, Marcos E. Barreto, P. Gonçalves, M. Habib, S. V. Ragavan, Howard Li, Alberto Vaccarella, Roberta Perrone, Edson Prestes e Silva","doi":"10.1109/ROSE.2013.6698412","DOIUrl":"https://doi.org/10.1109/ROSE.2013.6698412","url":null,"abstract":"Robots and robotics are becoming more complex and flexible, due to technological advancement, improved sensing capabilities and machine intelligence. Service robots target a wide range of applications, relying on advanced Human-Robot Interaction. Medical robotics is becoming a leading application area within, and the number of surgical, rehabilitation and hospital assistance robots is rising rapidly. However, the complexity of the medical environment has been a major barrier, preventing a wider use of robotic technology, thus mostly teleoperated, human-in-the-loop control solutions emerged so far. Providing smarter and better medical robots requires a systematic approach in describing and translating human processes for the robots. It is believed that ontologies can bridge human cognitive understanding and robotic reasoning (machine intelligence). Besides, ontologies serve as a tool and method to assess the added value robotic technology brings into the medical environment. The purpose of this paper is to identify relevant ontology research in medical robotic, and to review the state-of-the-art. It focuses on the surgical domain, fundamental terminology and interactions are described for two example applications in neurosurgery and orthopaedics.","PeriodicalId":187001,"journal":{"name":"2013 IEEE International Symposium on Robotic and Sensors Environments (ROSE)","volume":"28 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"134303892","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A supervised training and learning method for building identification in remotely sensed imaging
Pub Date: 2013-10-01 | DOI: 10.1109/ROSE.2013.6698421
Jordan Tremblay-Gosselin, A. Crétu
The paper investigates a novel approach for building identification in aerial images that combines a classical segmentation algorithm (region growing), a user-guided training approach and a supervised learning solution based on support vector machines. The user guides the training procedure by choosing points on the surface of objects of interest, e.g. buildings, as well as points on objects that are of no interest for the application, e.g. streets or vegetation. A local region growing algorithm is applied at each location chosen by the user, and the system then prompts the user to label the type of object selected. At the same time, a global region growing algorithm is applied at seeds spread uniformly over the image, and the resulting regions are combined. A series of shape-based features is then computed for each region, and a support vector machine is trained to distinguish objects of interest from objects of no interest. The proposed solution obtains results in line with those reported in the remote sensing literature in terms of recall and better results in terms of precision.
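A compact sketch of the pipeline's two building blocks, seed-based region growing and an SVM over simple shape features, is given below. The synthetic image, intensity threshold, and the three shape features are illustrative assumptions and do not reproduce the paper's feature set or user interaction.

```python
# Hedged sketch: region growing from a seed, shape features, and an SVM classifier.
import numpy as np
from collections import deque
from sklearn.svm import SVC

def region_grow(img, seed, tol=10):
    """Grow a region of pixels whose intensity stays within `tol` of the seed."""
    h, w = img.shape
    ref = float(img[seed])
    mask = np.zeros((h, w), dtype=bool)
    queue = deque([seed])
    mask[seed] = True
    while queue:
        r, c = queue.popleft()
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            rr, cc = r + dr, c + dc
            if 0 <= rr < h and 0 <= cc < w and not mask[rr, cc] \
                    and abs(float(img[rr, cc]) - ref) <= tol:
                mask[rr, cc] = True
                queue.append((rr, cc))
    return mask

def shape_features(mask):
    """Area, bounding-box fill ratio, and elongation of a segmented region."""
    rows, cols = np.nonzero(mask)
    height, width = np.ptp(rows) + 1, np.ptp(cols) + 1
    area = float(mask.sum())
    return [area, area / (height * width), max(height, width) / min(height, width)]

# Synthetic aerial-like image: a bright rectangular "building" and a long "road".
img = np.full((60, 60), 50, dtype=np.uint8)
img[10:25, 10:30] = 200                       # building-like blob
img[40:43, 0:60] = 120                        # road-like strip

building = shape_features(region_grow(img, (15, 15)))
road = shape_features(region_grow(img, (41, 30)))

# Train on user-labelled examples (1 = building, 0 = not of interest) and predict.
clf = SVC(kernel="rbf", gamma="scale").fit([building, road], [1, 0])
print("building region ->", clf.predict([building])[0],
      " road region ->", clf.predict([road])[0])
```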
{"title":"A supervised training and learning method for building identification in remotely sensed imaging","authors":"Jordan Tremblay-Gosselin, A. Crétu","doi":"10.1109/ROSE.2013.6698421","DOIUrl":"https://doi.org/10.1109/ROSE.2013.6698421","url":null,"abstract":"The paper investigates a novel approach for building identification in aerial images, that combines a classical segmentation algorithm, the region growing algorithm, a user guided training approach and a supervised learning solution based on support-vector machines. The user is guiding the training procedure by choosing points on the surface of objects of interest, e.g. buildings, as well as points over objects that are of no interest for the application, e.g. streets or vegetation. A local region growing algorithm is applied at each location chosen by the user. The system then prompts the user to label the type of object he/she selected. At the same time, a global region-growing algorithm is applied at uniformly spread seeds over the image and the resulting regions are combined. A series of features based on shape are then built for each region and a support-vector machine is trained to classify between objects of interest versus objects of no interest. The proposed solution obtains results in line in terms of recall and better in terms of precision than those reported in the remote sensing literature.","PeriodicalId":187001,"journal":{"name":"2013 IEEE International Symposium on Robotic and Sensors Environments (ROSE)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129784764","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Force controlled contour following on unknown objects with an industrial robot
Pub Date: 2013-10-01 | DOI: 10.1109/ROSE.2013.6698444
A. Winkler, J. Suchy
This paper deals with controller features that improve force-controlled contour following. This kind of robot force control may be used in surface finishing tasks such as polishing, deburring or grinding. The previously introduced proportional controller with positive position feedback yielded very good results in force control of a position-controlled robot in both the impact and contact phases. If the characteristics of the environment to be finished with the robot tool are not constant, unfavorable contact forces may occur that can damage the tool or the workpiece. To adapt to the current inclination angle between the robot end-effector and the environment, we investigate inserting an additional integrator into the force controller, which makes it possible to reduce static control errors. However, more attention must be paid during its parameterization with respect to the stability boundary of the closed control loop. Another novel feature of this paper is the variation of the end-effector velocity as a function of the force control error. With this idea, force peaks can be reduced and the loss of contact between the robot end-effector and the environment can be avoided, e.g. when the inclination angle of the environment changes. All algorithms proposed in this paper are successfully verified in practical experiments.
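The sketch below illustrates the two controller features discussed above on a toy contact model: an integral term added to a proportional force law to remove the static error when the surface stiffness changes, and a feed rate that shrinks as the force error grows. The spring contact model and gains are assumptions, not the paper's positive-position-feedback controller on a real robot.

```python
# Hedged sketch: PI force regulation against a spring-like surface plus
# feed-rate modulation as a function of the force error.
import numpy as np

KP, KI = 1e-5, 5e-5           # force-control gains (assumed), corrections in metres
F_DES, DT = 20.0, 0.004       # desired contact force (N) and control period (s)
V_NOM, ERR_REF = 0.05, 10.0   # nominal feed velocity and error scale for slowdown

def environment_force(depth, stiffness):
    """Toy contact model: linear spring, zero force when not in contact."""
    return max(stiffness * depth, 0.0)

depth, integ, pos_along = 0.0, 0.0, 0.0
stiffness = 40000.0                        # N/m, changes mid-run to mimic a new surface
for k in range(1000):
    if k == 500:
        stiffness = 80000.0                # stiffer patch of the workpiece
    f = environment_force(depth, stiffness)
    err = F_DES - f
    integ += err * DT
    depth += KP * err + KI * integ         # normal correction from the PI force law
    # Feed-rate modulation: slow down along the contour when the error is large.
    v_feed = V_NOM / (1.0 + abs(err) / ERR_REF)
    pos_along += v_feed * DT
print(f"final force: {environment_force(depth, stiffness):.1f} N, "
      f"distance travelled: {pos_along:.3f} m")
```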
{"title":"Force controlled contour following on unknown objects with an industrial robot","authors":"A. Winkler, J. Suchy","doi":"10.1109/ROSE.2013.6698444","DOIUrl":"https://doi.org/10.1109/ROSE.2013.6698444","url":null,"abstract":"This paper deals with controller features which improve force controlled contour following. This kind of robot force control may by used in surface finishing tasks like polishing, deburring or grinding. The already introduced proportional controller with positive position feedback brought very good results in force control of a position controlled robot in both impact and contact phase. If the characteristics of the environment which should be finished with the robot tool are not constant, unfavorable contact forces may occur which can damage the tool or the workpiece. For the purpose of adapting the current inclination angle between robot end-effector and environment we investigate the insertion of an additional integrator into the force controller. Thereby, it will be possible to reduce static control errors. However, more attention should be paid during its parameterization with respect to the stability boundary of the closed loop control. Another novel feature in this paper is the variation of the end-effector velocity as a function of the force control error. With this idea force peaks can be decreased or the loss of contact between robot end-effector and environment can be avoided, e.g. when the inclination angle of the environment changes. All algorithms proposed in this paper are successfully verified by practical experiments.","PeriodicalId":187001,"journal":{"name":"2013 IEEE International Symposium on Robotic and Sensors Environments (ROSE)","volume":"25 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133586588","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
MetaBot: Automated and dynamically schedulable robotic behaviors in retail environments
Pub Date: 2013-10-01 | DOI: 10.1109/ROSE.2013.6698434
Jonathan Francis, Utsav Drolia, Kunal Mankodiya, Rolando Martins, R. Gandhi, P. Narasimhan
The ever-increasing popularity of online stores is reshaping traditional commerce models. In particular, brick-and-mortar stores presently face the challenge of reinventing themselves and their business models to offer attractive yet low-cost alternatives to e-commerce. While other industries have already introduced new concepts to fight inefficiency (e.g., “just-in-time” inventory management in the automotive industry), retail stores face a more challenging environment that these models cannot accommodate. Stores remain heavily invested in battling the overhead costs of personnel management when, instead, a robotic automation scheme with retail-oriented behaviors could reduce the detection latency of out-of-stock and compliance-error phenomena throughout the store. These behaviors must be automated, multi-purpose and schedulable; they must also ensure that the robot accommodates store nuances and adapts its functionality appropriately. In this paper, we present an architecture that defines retail robot behaviors as a collection of reusable activities which, when permuted in various ways, allow a variety of high-level and application-specific tasks to be accomplished effectively. We evaluate this system on our robotic platform by scrutinizing the integrity of navigation and machine vision tasks, which we perform concurrently in an experimental store setup. The results show the feasibility and efficiency of our proposed architecture.
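As a rough illustration of composing reusable activities into schedulable retail tasks, the sketch below defines a few hypothetical activities and sequences them into named tasks run by a stand-in scheduler. The activity names and the scheduler are assumptions and not MetaBot's actual API.

```python
# Hedged sketch: reusable activities composed into named, schedulable tasks.
import time
from typing import Callable, Dict, List, Tuple

def navigate_to(aisle: str) -> None:
    print(f"navigating to aisle {aisle}")

def scan_shelf(aisle: str) -> None:
    print(f"scanning shelf in aisle {aisle} for out-of-stock / compliance errors")

def report_findings(aisle: str) -> None:
    print(f"uploading findings for aisle {aisle}")

# Tasks are sequences (permutations) of reusable activities.
TASKS: Dict[str, List[Callable[[str], None]]] = {
    "stock_audit": [navigate_to, scan_shelf, report_findings],
    "planogram_check": [navigate_to, scan_shelf],
}

def run_task(name: str, aisle: str) -> None:
    for activity in TASKS[name]:
        activity(aisle)

def schedule(entries: List[Tuple[float, str, str]]) -> None:
    """Run (delay_seconds, task, aisle) entries in time order; a stand-in scheduler."""
    for delay, task, aisle in sorted(entries):
        time.sleep(delay)
        run_task(task, aisle)

schedule([(0.0, "stock_audit", "A3"), (0.1, "planogram_check", "B1")])
```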
{"title":"MetaBot: Automated and dynamically schedulable robotic behaviors in retail environments","authors":"Jonathan Francis, Utsav Drolia, Kunal Mankodiya, Rolando Martins, R. Gandhi, P. Narasimhan","doi":"10.1109/ROSE.2013.6698434","DOIUrl":"https://doi.org/10.1109/ROSE.2013.6698434","url":null,"abstract":"The ever-increasing popularity of online stores is reshaping traditional commerce models. In particular, brick-and-mortar stores are presently facing the challenge of reinventing themselves and their business models to offer attractive yet low-cost alternatives to e-commerce. Other industries have already introduced new concepts to fight inefficiency (i.e., “Just-in-Time” inventory management in Automotive), retail stores face a more challenging environment which these models cannot accommodate. Stores remain heavily vested in battling the overhead costs of personnel management when, instead, a robotic automation scheme with retail-oriented behaviors could reduce the detection latency of out-of-stock and compliance error phenomena throughout the store. These behaviors must be automated, multi-purpose, and schedulable; they must also ensure that the robot coordinates store nuances to adapt its functionality appropriately. In this paper, we present our architecture that defines retail robot behaviors as a collection of reusable activities, which, when permuted various ways, allows for various high-level and application-specific tasks to be accomplished effectively. We evaluate this system on our robotic platform by scrutinizing the integrity of navigation and machine vision tasks, which we perform concurrently in an experimental store setup. Results show the feasibility and efficiency of our proposed architecture.","PeriodicalId":187001,"journal":{"name":"2013 IEEE International Symposium on Robotic and Sensors Environments (ROSE)","volume":"152 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122932749","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}