Many robot systems incorporate six-axis force/torque sensors to enable compliant interaction with the environment, even when a lower-cost three-axis sensor may be sufficient. One challenge is that a three-axis sensor may measure a coupling of the applied forces and torques; these can be decoupled with an appropriate calibration and with assumed knowledge of the location of the applied force. In this paper, we develop the method and open-source software to calibrate a commercially available three-axis sensor and verify its performance in static tests with known weights and in dynamic tests by comparison to an accurate six-axis sensor. Mean errors in static tests are less than 5%, and experiments demonstrate that the sensor can be used to control the contact force applied by a robot-held ultrasound probe.
{"title":"Integration of a Low-Cost Three-Axis Sensor for Robot Force Control","authors":"Shuyang Chen, Jianren Wang, P. Kazanzides","doi":"10.1109/IRC.2018.00052","DOIUrl":"https://doi.org/10.1109/IRC.2018.00052","url":null,"abstract":"Many robot systems incorporate six-axis force/torque sensors to enable compliant interaction with the environment, even when a lower-cost three-axis sensor may be sufficient. One challenge is that a three-axis sensor may measure a coupling of the applied forces and torques; these can be decoupled with an appropriate calibration and with assumed knowledge of the location of the applied force. In this paper, we develop the method and open source software to calibrate a commercially-available three-axis sensor and verify its performance in static tests with known weights and in dynamic tests by comparison to an accurate six-axis sensor. Mean errors in static tests are less than 5% and experiments demonstrate that the sensor can be used to control the contact force applied by a robot-held ultrasound probe.","PeriodicalId":416113,"journal":{"name":"2018 Second IEEE International Conference on Robotic Computing (IRC)","volume":"160 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1900-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116514052","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
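The decoupling calibration described above can be illustrated with a least-squares fit: given raw three-axis readings paired with known applied forces (e.g. from calibration weights), a linear decoupling matrix is recovered. This is a minimal sketch of the idea, not the paper's actual procedure; the synthetic coupling matrix and sample counts are invented for the example.

```python
import numpy as np

def calibrate(raw_readings, applied_forces):
    """Fit a linear decoupling matrix C so that C @ raw ~= applied force.

    raw_readings:   (N, 3) raw three-axis sensor outputs
    applied_forces: (N, 3) known applied forces (e.g. from weights)
    Returns the (3, 3) calibration matrix C.
    """
    # Solve raw @ C.T ~= applied in the least-squares sense.
    C_T, *_ = np.linalg.lstsq(raw_readings, applied_forces, rcond=None)
    return C_T.T

# Synthetic check: invent a coupling matrix, then recover its inverse mapping.
rng = np.random.default_rng(0)
true_forces = rng.normal(size=(50, 3))
coupling = np.array([[1.00, 0.10, 0.00],
                     [0.05, 0.90, 0.10],
                     [0.00, 0.10, 1.10]])
raw = true_forces @ coupling.T          # simulated coupled sensor readings
C = calibrate(raw, true_forces)
recovered = raw @ C.T                   # decoupled force estimates
```

In a noiseless synthetic setup like this, the fitted matrix is simply the inverse of the coupling matrix; with real sensor data the same fit averages out noise across the calibration poses.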
A design experience involving the transformation of a retired security robot into a social robot that safely serves refreshments to party guests is presented. The design is based on a security robot from the 1990s. The robot has the physical form of a bar table on wheels. Simplicity of interaction is emphasized in the design. A three-state behavior is used: if it detects drinks and refreshments on its tabletop, it wanders the room in search of people using a phonotaxis approach. If the drinks and refreshments have been removed, it returns to a designated corner of the room for reloading. It uses a lidar sensor to avoid collisions with obstacles and people. The robot serves as an interesting research platform for human-robot interaction.
{"title":"Lurch: The Social Robot that can Wait","authors":"D. Claveau, S. Force","doi":"10.1109/IRC.2018.00037","DOIUrl":"https://doi.org/10.1109/IRC.2018.00037","url":null,"abstract":"A design experience involving the transformation of a retired security robot into a social robot that safely serves refreshments to party guests is presented. The design is based on a security robot from the 1990's. The robot has the physical form of a bar table on wheels. Simplicity of interaction is emphasized in the design. A three-state behavior is used: if it detects drinks and refreshments on its tabletop then it wanders the room in search of people using a phonotaxis approach. If the drinks and refreshments have been removed then it returns to a designated corner of the room for reloading. It uses a lidar sensor to avoid collisions with obstacles and people. The robot serves as an interesting research platform for human-robot interaction.","PeriodicalId":416113,"journal":{"name":"2018 Second IEEE International Conference on Robotic Computing (IRC)","volume":"44 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1900-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126555413","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
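The three-state behavior described above can be sketched as a tiny state machine. The state names and the exact transition conditions are hypothetical, inferred from the abstract rather than taken from the paper.

```python
# Hypothetical names for the three behavioral states described in the abstract.
WAIT, WANDER, RETURN = "wait", "wander", "return"

def step(state, has_drinks, at_corner):
    """One tick of the serving behavior: wander while the tray is loaded,
    return to the reload corner when it is empty, and wait there."""
    if has_drinks:
        return WANDER          # refreshments on the tabletop: seek guests
    if not at_corner:
        return RETURN          # tray emptied: head back for reloading
    return WAIT                # at the corner, waiting to be reloaded

# Example: a loaded robot starts wandering, then returns once emptied.
states = [step(WAIT, True, True), step(WANDER, False, False), step(RETURN, False, True)]
```

The `state` argument is unused in this minimal sketch because the next state depends only on the sensed conditions; a real controller would likely add hysteresis or timers.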
Johannes Wienke, D. L. Wigand, N. Köster, S. Wrede
In complex technical systems like robotics platforms, a multitude of issues can impair dependability. While common testing and simulation methods largely focus on functional aspects, the utilization of resources like CPU, network bandwidth, or memory is only rarely tested systematically. With this contribution we propose a novel Domain-Specific Language (DSL) for modeling performance tests for individual robotics components, with the aim of establishing a systematic testing process for detecting regressions in resource utilization. This DSL builds upon a testing framework from previous research and aims to significantly reduce the effort and complexity of creating performance tests. The DSL is built using the MPS language workbench and provides a feature-rich editor with modern editing aids. An evaluation indicates that developing performance tests requires only one third of the effort compared to the original Java-based API.
{"title":"Model-Based Performance Testing for Robotics Software Components","authors":"Johannes Wienke, D. L. Wigand, N. Köster, S. Wrede","doi":"10.1109/IRC.2018.00013","DOIUrl":"https://doi.org/10.1109/IRC.2018.00013","url":null,"abstract":"In complex technical systems like robotics platforms, a manifold of issues can impair their dependability. While common testing and simulation methods largely focus on functional aspects, the utilization of resources like CPU, network bandwidth, or memory is only rarely tested systematically. With this contribution we propose a novel Domain-Specific Language (DSL) for modeling performance tests for individual robotics components with the aim to establish a systematic testing process for detecting regressions regarding the resource utilization. This DSL builds upon a testing framework from previous research and aims to significantly reduce the effort and complexity for creating performance tests. The DSL is built using the MPS language workbench and provides a feature-rich editor with modern editing aids. An evaluation indicates that developing performance tests requires only one third of the work in comparison to the original Java-based API.","PeriodicalId":416113,"journal":{"name":"2018 Second IEEE International Conference on Robotic Computing (IRC)","volume":"22 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1900-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122811737","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
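To make the idea of a resource-utilization performance test concrete, here is a minimal sketch of the kind of check such a test expresses: run a component once, measure elapsed time and peak memory, and compare against budgets. This illustrates the concept only; it uses plain Python stdlib tools, not the paper's DSL or testing framework, and the toy component and budgets are invented.

```python
import time
import tracemalloc

def run_performance_test(component, budget_seconds, budget_bytes):
    """Run a component once and check its resource use against budgets.

    Returns (elapsed_seconds, peak_bytes, passed).
    """
    tracemalloc.start()
    start = time.perf_counter()
    component()
    elapsed = time.perf_counter() - start
    _, peak = tracemalloc.get_traced_memory()
    tracemalloc.stop()
    return elapsed, peak, elapsed <= budget_seconds and peak <= budget_bytes

# A toy "component" that allocates a small buffer.
def toy_component():
    buf = bytearray(10_000)
    return len(buf)

elapsed, peak, passed = run_performance_test(toy_component, 1.0, 1_000_000)
```

A model-based approach generates many such checks (per component, per resource) from a declarative description instead of hand-writing the measurement boilerplate each time.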
Abel Hailemichael, S. Gebreyohannes, A. Karimoddini, Kaushik Roy, A. Homaifar
Since fuzzy logic controllers (FLCs) can handle complex systems without requiring much knowledge of the systems' mathematical models, they are widely used for a range of robotic control applications. Further, the ability of FLCs (particularly type-2 FLCs) to effectively capture and accommodate uncertainties has made them a suitable choice for implementing robotic control applications in uncertain environments. However, developing type-1 and type-2 FLCs for real-time robotic control applications is more challenging than developing traditional controllers such as PID controllers: the fuzzy logic calculations involved are more complex, and few tools have been developed to assist FLC application developers. In this paper, therefore, using an object-oriented approach and the Unified Modeling Language (UML), we demonstrate a systematic approach for developing a new generic and configurable fuzzy logic system (FLS) library that eases the implementation of real-time type-1 and interval type-2 FLC applications based on both Mamdani and Takagi-Sugeno-Kang (TSK) inference mechanisms. To evaluate the developed library, we apply it to interval type-2 TSK fuzzy logic altitude control of a quadcopter unmanned aerial vehicle (UAV). The response of this fuzzy logic controller is then compared with that of a classical PD controller.
{"title":"Development of a Generic and Configurable Fuzzy Logic Systems Library for Real-Time Control Applications Using an Object-Oriented Approach","authors":"Abel Hailemichael, S. Gebreyohannes, A. Karimoddini, Kaushik Roy, A. Homaifar","doi":"10.1109/IRC.2018.00032","DOIUrl":"https://doi.org/10.1109/IRC.2018.00032","url":null,"abstract":"Since fuzzy logic controllers (FLCs) can handle complex systems without knowing much about the systems' mathematical model, they are widely used for a range of robotic control applications. Further, the ability of FLCs (particularly, type-2 FLCs) to effectively capture and accommodate uncertainties has made them one of the suitable choices for implementing robotic control applications in uncertain environments. However, developing type-1 and type-2 FLCs for real-time robotic control applications is relatively more challenging than developing traditional controllers such as PID controllers. The reason is, the fuzzy logic calculations involved are more complex and not much tools have been developed to assist FLC application developers. In this paper, therefore, using an object-oriented approach and unified model language (UML), we demonstrate a systematic approach for developing a new generic and configurable fuzzy logic system (FLS) library that eases the implementation of real-time type-1 and interval type-2 FLC applications based on both Mamdani and Takagi-Sugeno-Kang (TSK) inference mechanisms. To evaluate the developed library, we have implemented it for the interval type-2 TSK fuzzy logic altitude control of a quadcopter unmanned aerial vehicle (UAV). The response of this fuzzy logic controller is then compared with the response of a classical PD controller.","PeriodicalId":416113,"journal":{"name":"2018 Second IEEE International Conference on Robotic Computing (IRC)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1900-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128683067","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
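As an illustration of the kind of inference such a library implements, a minimal type-1 TSK step over a single altitude-error input can be sketched as follows. This is deliberately simpler than the interval type-2 controller used in the paper, and the triangular membership parameters and consequent gains are invented for the example.

```python
def tri(x, a, b, c):
    """Triangular membership function rising from a, peaking at b, falling to c."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def tsk_type1(error):
    """Minimal type-1 TSK inference: three rules over the altitude error.

    Each rule pairs a firing strength with a linear consequent; the output
    is the firing-strength-weighted average of the consequents.
    """
    rules = [
        (tri(error, -2.0, -1.0, 0.0), 1.5 * error),   # error strongly negative
        (tri(error, -1.0,  0.0, 1.0), 0.5 * error),   # error near zero
        (tri(error,  0.0,  1.0, 2.0), 1.5 * error),   # error strongly positive
    ]
    num = sum(w * u for w, u in rules)
    den = sum(w for w, _ in rules)
    return num / den if den > 0 else 0.0
```

An interval type-2 version would carry lower and upper membership grades through the same weighted average and then type-reduce, which is the extra complexity the library is meant to hide.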
This paper proposes a novel sign language learning method which employs region of interest (ROI) segmentation as preprocessing of the input data through an object detection network. As the input, 2D image frames are sampled and concatenated into a wide image. From this image, the ROI is segmented by detecting and extracting the area of the hands, the crucial information in sign language. The hand area detection is implemented with a well-known object detection network, You Only Look Once (YOLO), and the sign language learning is implemented with a convolutional neural network (CNN). Twelve sign gestures are tested through a 2D camera. The results show that, compared to the method without ROI segmentation, the accuracy is increased by 12% (from 86% to 98%) and the training time is reduced by over 50%. Moreover, thanks to the pretrained hand features, new sign gestures can easily be added for learning.
{"title":"An Effective Sign Language Learning with Object Detection Based ROI Segmentation","authors":"Sunmok Kim, Y. Ji, Ki-Baek Lee","doi":"10.1109/IRC.2018.00069","DOIUrl":"https://doi.org/10.1109/IRC.2018.00069","url":null,"abstract":"This paper proposes a novel sign language learning method which employs region of interest (ROI) segmentation preprocessing of input data through an object detection network. As the input, 2D image frames are sampled and concatenated into a wide image. From the image, ROI is segmented by detecting and extracting the area of hands, crucial information in sign language. The hand area detection process is implemented with a well-known object detection network, you only look once (YOLO) and the sign language learning is implemented with a convolutional neural network (CNN). 12 sign gestures are tested through a 2D camera. The results show that, compared to the method without ROI segmentation, the accuracy is increased by 12% (from 86% to 98%) as well as the training time is reduced by over 50%. Above all, through the pretrained hand features, it has the advantage of ease in adding more sign gestures to learn.","PeriodicalId":416113,"journal":{"name":"2018 Second IEEE International Conference on Robotic Computing (IRC)","volume":"26 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1900-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128761898","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
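The preprocessing pipeline above (concatenate sampled frames into a wide image, then crop the ROI around the detected hands) can be sketched with NumPy. The frame sizes and the detection boxes standing in for YOLO outputs are hypothetical.

```python
import numpy as np

def make_wide_image(frames):
    """Concatenate equally sized frames side by side into one wide image."""
    return np.concatenate(frames, axis=1)

def crop_roi(image, boxes):
    """Crop the union of detected hand boxes (x0, y0, x1, y1) from the image."""
    x0 = min(b[0] for b in boxes)
    y0 = min(b[1] for b in boxes)
    x1 = max(b[2] for b in boxes)
    y1 = max(b[3] for b in boxes)
    return image[y0:y1, x0:x1]

# Four sampled 64x64 RGB frames become one 64x256 wide image.
frames = [np.full((64, 64, 3), i, dtype=np.uint8) for i in range(4)]
wide = make_wide_image(frames)
# These boxes stand in for YOLO hand detections (invented coordinates).
roi = crop_roi(wide, [(10, 20, 50, 60), (100, 25, 140, 55)])
```

The cropped `roi` (rather than the full wide image) would then be fed to the CNN classifier, which is what removes background clutter and speeds up training.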
Despite demand from a broad spectrum of users for increasing automation of specified tasks, software development is still a complex activity oriented mainly to professional programmers. Often, the exploration and understanding of large code bases is a difficult task even for experienced developers. Recently, semantic parsers and advances in research areas primarily investigated within the field of natural language human-robot interaction have shown to be potentially useful for end-user development supported by natural language communication. Hence, this paper provides a structured review and categorization of approaches to ease software development, both for professional software programmers and for end-users with no prior programming skills. We then focus on semantic developments based on natural language understanding and on a comparison between the outlined approaches.
{"title":"Towards Semantic Approaches for General-Purpose End-User Development","authors":"Mattia Atzeni, M. Atzori","doi":"10.1109/IRC.2018.00077","DOIUrl":"https://doi.org/10.1109/IRC.2018.00077","url":null,"abstract":"Despite the demand for increasing automation within specified tasks by a large spectrum of different users, software development is still a complex task mainly oriented to professional programmers. Often, the exploration and understanding of large code bases is also a difficult task for experienced developers. Recently, semantic parsers and advances in research areas primarily investigated within the field of natural language human-robot interaction, have shown to be potentially useful for end-user development supported by natural language communication. Hence, this paper provides a structured review and categorization of approaches to ease software development, both for professional software programmers and for end-users with no prior programming skills. We then focus on semantic developments based on natural language understanding and on a comparison between the outlined approaches.","PeriodicalId":416113,"journal":{"name":"2018 Second IEEE International Conference on Robotic Computing (IRC)","volume":"69 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1900-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115283288","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Nicholas Dolan-Stern, Kevin Scrivnor, Jason T. Isaacs
Implementations of central place foraging using multi-robot systems need efficient mechanisms for searching for resources and gathering them in a central home-nest location. We propose an approach that partitions the search space and assigns agents multiple behavioral roles inspired by honey bee colonies as a way of organizing the search-and-gather operation. Through simulation we demonstrate that this approach minimizes the spatio-temporal congestion that results from many robots sharing a common space near the home-nest. We compare our role-based algorithm to the Distributed Deterministic Spiral Search Algorithm (DDSA) in a high-fidelity simulation environment built using ROS and Gazebo.
{"title":"Multimodal Central Place Foraging","authors":"Nicholas Dolan-Stern, Kevin Scrivnor, Jason T. Isaacs","doi":"10.1109/IRC.2018.00019","DOIUrl":"https://doi.org/10.1109/IRC.2018.00019","url":null,"abstract":"Implementations of central place foraging using multi-robot systems need efficient mechanisms for searching for resources and gathering them in a central home-nest location. We propose an approach that partitions the search space and assigns agents multiple behavioral roles inspired by honey bee colonies as a way of organizing the search and gather operation. Through simulation we demonstrate that this approach minimizes spatio-temporal congestion that results from many robots sharing a common space near the home-nest. We compare our role based algorithm to the Distributed Deterministic Spiral Search Algorithm (DDSA) in a high fidelity simulation environment built using ROS and Gazebo.","PeriodicalId":416113,"journal":{"name":"2018 Second IEEE International Conference on Robotic Computing (IRC)","volume":"21 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1900-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125174191","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
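One simple way to realize the "partition the search space and assign roles" idea is to split the area around the nest into equal angular sectors and tag each robot with a role. This sketch is illustrative only; the role names and the sector scheme are invented, not the paper's exact design.

```python
import math

def assign_sectors(num_robots, num_scouts):
    """Split the space around the nest into equal angular sectors and give
    each robot a sector plus a role (scout vs. gatherer).

    Returns a list of (role, sector_start_angle, sector_end_angle) tuples.
    """
    width = 2 * math.pi / num_robots
    assignments = []
    for i in range(num_robots):
        role = "scout" if i < num_scouts else "gatherer"
        assignments.append((role, i * width, (i + 1) * width))
    return assignments

plan = assign_sectors(6, 2)   # 6 robots, 2 of them scouting
```

Because each robot approaches the nest through its own sector, paths near the home-nest rarely cross, which is the congestion-avoidance effect the abstract describes.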
T. Wiemann, Isaak Mitschke, Alexander Mock, J. Hertzberg
Generating 3D robotic maps from point cloud data is an active field of research. Handling high-resolution data from terrestrial laser scanning to generate maps for mobile robots is still challenging, especially for city-scale environments. In this short paper, we present the results of an approach for surface reconstruction from arbitrarily large point clouds. To achieve this, we partition the large input data into suitable chunks, which are serialized to a shared hard drive. After computation, the partial results are fused into a globally consistent reconstruction.
{"title":"Surface Reconstruction from Arbitrarily Large Point Clouds","authors":"T. Wiemann, Isaak Mitschke, Alexander Mock, J. Hertzberg","doi":"10.1109/IRC.2018.00059","DOIUrl":"https://doi.org/10.1109/IRC.2018.00059","url":null,"abstract":"Generating 3D robotic maps from point cloud data is an active field of research. To handle high resolution data from terrestrial laser scanning to generate maps for mobile robots is still challenging, especially for city scale environments. In this short paper, we present the results of an approach for surface reconstruction from arbitrarily large point clouds. To achieve this, we serialize the large input data into suitable chunks, that are serialized to a shared hard drive. After computation, the partial results are fused into a globally consistent reconstruction.","PeriodicalId":416113,"journal":{"name":"2018 Second IEEE International Conference on Robotic Computing (IRC)","volume":"35 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1900-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130497607","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
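The chunking step can be sketched as hashing each point into an axis-aligned cubic grid cell, so that each cell's points can be reconstructed (and serialized) independently before the partial meshes are fused. A minimal sketch, with invented sample points; the actual chunk layout and fusion strategy are the paper's, not shown here.

```python
import math
from collections import defaultdict

def chunk_points(points, chunk_size):
    """Group 3D points into cubic chunks keyed by integer grid index,
    so each chunk can be processed and serialized independently."""
    chunks = defaultdict(list)
    for p in points:
        key = tuple(math.floor(c / chunk_size) for c in p)
        chunks[key].append(p)
    return chunks

pts = [(0.1, 0.2, 0.3), (0.9, 0.1, 0.0), (1.5, 0.2, 0.3), (-0.2, 0.0, 0.0)]
chunks = chunk_points(pts, 1.0)
```

Because the grid key is computed per point, this pass streams through arbitrarily large inputs without holding the whole cloud in memory, which is the property the approach relies on.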
This paper introduces a robot hand that can freely perform violin fingering. The proposed robot hand consists of an index, middle, ring, and little finger, with a servo motor placed in the palm so that each finger is driven independently. The thumb is fixed at a certain angle to support the violin neck. The wrist has one degree of freedom (DoF) so that the hand can reach each string (E, A, D, G) for violin fingering, and each of the index, middle, ring, and little fingers has a force sensor in its fingertip. The four strings of the violin are the E, A, D, and G strings, and each string can produce 28 notes across seven positions. To verify that the developed robot hand can press the four strings, an experiment was carried out by attaching it to an actual robot arm. As a result, the notes Bb-B-C-C#-D-Eb-E on the A string, D#-E-F-F#-G-G#-A on the D string, and G#-A-Bb-B-C-C#-D on the G string were pressed in sequence.
{"title":"Development of Robot Hand Focused on Violin Fingering","authors":"Eunha Moon, Hyeonjun Park, Donghan Kim","doi":"10.1109/IRC.2018.00072","DOIUrl":"https://doi.org/10.1109/IRC.2018.00072","url":null,"abstract":"In this paper introduces robot hand which can freely perform violin fingering. The proposed robot hand consists of an index, middle, ring, and little finger, and the servo motor is placed in the palm so that each finger is driven independently. At this time, the thumb finger is fixed at a certain angle to support the violin neck. In the case of a wrist, it is 1 degree of freedom (DoF) to reach each string (E, A, D, G) for violin fingering. And each index, middle, ring, and little fingers have a force sensor in the fingertip. The four strings of the violin are composed of E, A, D, and G strings, and each string can have 28 notes in seven positions. In order to verify that the developed robot hand presses the four strings, the experiment was carried out by attaching it to the actual robot arm. As a result, the A-string of Bb-B-C-C #-D-Eb-E, are pressed in sequence, the D-string, D # -E-F-F # -G-G # -A, are pressed in sequence, the G-string G#-A-Bb-B-C-C#-D, are pressed in sequence.","PeriodicalId":416113,"journal":{"name":"2018 Second IEEE International Conference on Robotic Computing (IRC)","volume":"27 4","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1900-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"120924248","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
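The note sequences in the experiment follow directly from pitch arithmetic: pressing successive semitone positions above a string's open pitch. A small sketch of that mapping (a fixed pitch-class table is assumed, so some accidentals are spelled enharmonically, e.g. Eb where the abstract writes D#):

```python
# One fixed spelling per pitch class (enharmonic choices are a convention here).
NOTE_NAMES = ["C", "C#", "D", "Eb", "E", "F", "F#", "G", "G#", "A", "Bb", "B"]
OPEN_STRINGS = {"G": 7, "D": 2, "A": 9, "E": 4}   # pitch-class index of each open string

def notes_on_string(string, start=1, count=7):
    """Note names reached by pressing `count` successive semitone positions,
    beginning `start` semitones above the open string."""
    open_pc = OPEN_STRINGS[string]
    return [NOTE_NAMES[(open_pc + s) % 12] for s in range(start, start + count)]
```

For the A string this reproduces the tested sequence Bb-B-C-C#-D-Eb-E, and for the G string G#-A-Bb-B-C-C#-D.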
The widespread use of vision systems in robotics introduces a number of challenges related to the management of image acquisition and image processing tasks, as well as their coupling to the robot control function. With the proliferation of more distributed setups and flexible robotic architectures, the workflow of image acquisition needs to support a wider variety of communication styles and application scenarios. This paper presents FxIS, a flexible image acquisition service targeting distributed robotic systems with event-based communication. The principal idea of FxIS is the composition of a number of execution threads with a set of concurrent data structures, supporting acquisition from multiple cameras that is closely synchronized in time, both between the cameras and with the request timestamp.
{"title":"Flexible Image Acquisition Service for Distributed Robotic Systems","authors":"Oleksandr Semeniuta, P. Falkman","doi":"10.1109/IRC.2018.00024","DOIUrl":"https://doi.org/10.1109/IRC.2018.00024","url":null,"abstract":"The widespread use vision systems in robotics introduces a number of challenges related to management of image acquisition and image processing tasks, as well as their coupling to the robot control function. With the proliferation of more distributed setups and flexible robotic architectures, the workflow of image acquisition needs to support a wider variety of communication styles and application scenarios. This paper presents FxIS, a flexible image acquisition service targeting distributed robotic systems with event-based communication. The principal idea a FxIS is in composition of a number of execution threads with a set of concurrent data structures, supporting acquisition from multiple cameras that is closely synchronized in time, both between the cameras and with the request timestamp.","PeriodicalId":416113,"journal":{"name":"2018 Second IEEE International Conference on Robotic Computing (IRC)","volume":"64 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1900-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114964165","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
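The timestamp-synchronization idea can be illustrated by selecting, for each camera's buffered stream, the frame whose timestamp lies closest to the request time. This is a sketch of the selection step only, with invented timestamps; FxIS's actual thread and buffer design is not reproduced here.

```python
from bisect import bisect_left

def closest_frame(timestamps, t_request):
    """Index of the stored timestamp closest to the request timestamp.
    `timestamps` must be sorted ascending."""
    i = bisect_left(timestamps, t_request)
    if i == 0:
        return 0
    if i == len(timestamps):
        return len(timestamps) - 1
    # Pick whichever neighbor is nearer to the request time.
    return i if timestamps[i] - t_request < t_request - timestamps[i - 1] else i - 1

def synchronized_pick(camera_streams, t_request):
    """For each camera, pick the buffered frame timestamp nearest the request time."""
    return {cam: ts[closest_frame(ts, t_request)]
            for cam, ts in camera_streams.items()}

# Two cameras at ~30 fps with slightly offset clocks (invented values).
streams = {"cam0": [0.000, 0.033, 0.066, 0.100],
           "cam1": [0.005, 0.038, 0.071, 0.104]}
picked = synchronized_pick(streams, 0.07)
```

Running the selection per camera against a single request timestamp is what keeps the returned frames close both to each other and to the request, as the abstract requires.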