Pub Date: 2014-12-01 | DOI: 10.1109/RIISS.2014.7009174
M. Nii, Kazunobu Takahama, T. Iwamoto, Takafumi Matsuda, Yuki Matsumoto, K. Maenaka
We previously proposed a human activity estimation method based on a standard three-layer feedforward neural network. The purpose of the method is to record a subject's activity automatically; the recorded activity includes not only raw accelerometer data but also a rough description of the subject's activity. To train the neural networks, we had to prepare numerical accelerometer datasets measured for every subject. In this paper, we propose a fuzzy-neural-network-based method for recording subject activity. The proposed fuzzy neural network can handle both real and fuzzy numbers as inputs and outputs. Because the method handles fuzzy numbers, the training dataset can contain general rules, for example: "If the x- and y-axis accelerometer outputs are almost zero and the z-axis accelerometer output is approximately equal to the acceleration of gravity, then the subject is standing."
Title: Fuzzy neural network based activity estimation for recording human daily activity
Published in: 2014 IEEE Symposium on Robotic Intelligence in Informationally Structured Space (RiiSS)
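The key idea of handling fuzzy numbers in a feedforward network can be sketched with interval arithmetic: represent an imprecise input such as "almost zero" as an interval and propagate its endpoints through a neuron. This is a minimal illustration under assumed weights and thresholds, not the authors' actual network:

```python
import math

def interval_affine(x_lo, x_hi, w, b):
    # Propagate interval inputs through one affine unit: a positive weight
    # maps the lower bound to the lower bound; a negative weight swaps them.
    lo = hi = b
    for xl, xh, wi in zip(x_lo, x_hi, w):
        if wi >= 0:
            lo += wi * xl
            hi += wi * xh
        else:
            lo += wi * xh
            hi += wi * xl
    return lo, hi

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def fuzzy_unit(x_lo, x_hi, w, b):
    # The sigmoid is monotone, so the output interval is the image of the
    # input interval's endpoints.
    lo, hi = interval_affine(x_lo, x_hi, w, b)
    return sigmoid(lo), sigmoid(hi)

# "x and y almost zero, z near gravity" encoded as intervals (m/s^2);
# weights and bias are illustrative, not taken from the paper.
x_lo, x_hi = [-0.5, -0.5, 9.3], [0.5, 0.5, 10.3]
out = fuzzy_unit(x_lo, x_hi, [0.1, 0.1, 0.5], -4.0)
```

A rule such as the "standing" example then becomes one interval-valued training pattern rather than many crisp measurements.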
Pub Date: 2014-12-01 | DOI: 10.1109/RIISS.2014.7009183
H. Masuta, Shinichiro Makino, Hun-ok Lim, T. Motoyoshi, K. Koyanagi, T. Oshima
This paper describes unknown object extraction based on plane detection for an intelligent robot using a 3D range sensor. Various methods have previously been proposed for perceiving unknown environments. However, conventional unknown object extraction methods need predefined knowledge and suffer from high computational costs and low accuracy for small objects. To solve these problems, we propose an online-processable unknown object extraction method based on 3D plane detection. To detect planes in 3D space, we have proposed a simple plane detection method that applies particle swarm optimization (PSO) with region growing (RG), together with integrated object plane detection. The simple plane detection focuses on detecting small planes and on reducing computational costs, while the integrated object plane detection focuses on the stability of the detected planes. Our plane detection method can detect many planes in view. This paper proposes an object extraction method that groups planes according to their relative positions. Through experiments, we show that unknown objects are extracted at low computational cost. Moreover, the proposed method extracts objects in complicated environments.
Title: Unknown object extraction based on plane detection in 3D space
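The PSO part of the pipeline can be sketched as a search over plane parameters that maximizes the number of inlier points. The sketch below assumes a plain inlier-count fitness and standard PSO constants; the paper's combination with region growing and its integrated object plane detection are omitted:

```python
import random

def plane_inliers(points, a, b, c, d, eps=0.05):
    # Count points within eps of the plane a*x + b*y + c*z = d.
    norm = (a * a + b * b + c * c) ** 0.5 or 1.0
    return sum(1 for x, y, z in points
               if abs(a * x + b * y + c * z - d) / norm < eps)

def pso_plane(points, n_particles=20, iters=40, seed=0):
    # Minimal PSO over plane parameters (a, b, c, d) with inlier count
    # as the fitness to maximize (illustrative constants).
    rng = random.Random(seed)
    dim = 4
    pos = [[rng.uniform(-1, 1) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    pscore = [plane_inliers(points, *p) for p in pos]
    g = max(range(n_particles), key=lambda i: pscore[i])
    gbest, gscore = pbest[g][:], pscore[g]
    for _ in range(iters):
        for i in range(n_particles):
            for k in range(dim):
                vel[i][k] = (0.7 * vel[i][k]
                             + 1.5 * rng.random() * (pbest[i][k] - pos[i][k])
                             + 1.5 * rng.random() * (gbest[k] - pos[i][k]))
                pos[i][k] += vel[i][k]
            s = plane_inliers(points, *pos[i])
            if s > pscore[i]:
                pbest[i], pscore[i] = pos[i][:], s
                if s > gscore:
                    gbest, gscore = pos[i][:], s
    return gbest, gscore

# Synthetic data: a grid of points on the plane z = 0 plus one outlier.
pts = [(x * 0.1, y * 0.1, 0.0) for x in range(5) for y in range(5)]
pts.append((0.0, 0.0, 5.0))
best, score = pso_plane(pts)
```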
Pub Date: 2014-12-01 | DOI: 10.1109/RIISS.2014.7009167
Faezeh Heydari Khabbaz, A. Goldenberg, J. Drake
This paper proposes a new adaptive method for controlling two-channel bilateral teleoperation systems. The control method consists of adaptive force-feedback and motion-command scaling factors that ensure stable teleoperation with the maximum achievable transparency at every moment of operation. The method integrates real-time estimation of the robot's environment impedance with an adaptive force and motion scaling-factor generator. The paper formulates the adaptive scaling factors for stable teleoperation based on the impedance models of the master and slave and the estimated impedance of the environment. The feasibility and accuracy of an online environment impedance estimation method are analyzed through simulations and experiments. The proposed adaptive bilateral control method is then verified through simulation studies. Results show stable interactions with maximum transparency for the simulated teleoperation system.
Title: An adaptive force reflective teleoperation control method using online environment impedance estimation
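Online environment impedance estimation of this kind is often done with recursive least squares over a spring-damper contact model. The sketch below assumes the simple model f ≈ k·x + b·v with synthetic noiseless data; the paper's estimator and its coupling to the scaling-factor generator are more involved:

```python
import math

def rls_impedance(samples, lam=0.99):
    # Recursive least squares estimating environment stiffness k and
    # damping b from (position, velocity, force) samples, assuming
    # f = k*x + b*v (illustrative contact model).
    theta = [0.0, 0.0]                        # [k, b]
    P = [[1e4, 0.0], [0.0, 1e4]]              # covariance, large = weak prior
    for x, v, f in samples:
        phi = [x, v]
        Pphi = [P[0][0] * phi[0] + P[0][1] * phi[1],
                P[1][0] * phi[0] + P[1][1] * phi[1]]
        denom = lam + phi[0] * Pphi[0] + phi[1] * Pphi[1]
        K = [Pphi[0] / denom, Pphi[1] / denom]        # gain vector
        err = f - (theta[0] * phi[0] + theta[1] * phi[1])
        theta = [theta[0] + K[0] * err, theta[1] + K[1] * err]
        # Covariance update: P = (P - K * (P phi)^T) / lam
        P = [[(P[0][0] - K[0] * Pphi[0]) / lam, (P[0][1] - K[0] * Pphi[1]) / lam],
             [(P[1][0] - K[1] * Pphi[0]) / lam, (P[1][1] - K[1] * Pphi[1]) / lam]]
    return theta

# Simulated contact: stiffness 500 N/m, damping 10 N*s/m, no noise.
data = []
for i in range(200):
    t = i * 0.01
    x = 0.01 * math.sin(2 * t)
    v = 0.02 * math.cos(2 * t)
    data.append((x, v, 500 * x + 10 * v))
k_hat, b_hat = rls_impedance(data)
```

The forgetting factor lam lets the estimate track an environment whose impedance changes during operation.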
Pub Date: 2014-12-01 | DOI: 10.1109/RIISS.2014.7009165
János Botzheim, N. Kubota
In this paper, a spiking-neural-network-based emotional model is proposed for a smartphone-based robot partner. Since a smartphone has limited computational power compared to a personal computer, a simple spike response model is applied to the neurons in the network. The network has three layers following the concepts of emotion, feeling, and mood. Perceptual input stimulates the neurons in the first (emotion) layer. A weight-adjustment rule based on Hebbian learning is also proposed for the interconnected neurons in the feeling layer and between the feeling and mood layers. Experiments are presented to validate the proposed method. Based on the emotional model, output actions such as gestural and facial expressions for the robot are calculated.
Title: Spiking neural network based emotional model for robot partner
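A simple spike response model of the kind suited to a low-power device can be sketched as follows: each input spike contributes an exponentially decaying kernel to the membrane potential, and the neuron fires when the potential crosses a threshold. All constants here are illustrative, not the paper's:

```python
import math

def srm_neuron(input_spikes, weight=0.6, tau=10.0, theta=1.0, t_end=50):
    # Minimal spike response model: summed decaying kernels from input
    # spikes; fire when potential >= theta, then reset. A single input
    # spike is subthreshold, so firing requires temporal summation.
    out, active = [], []
    for t in range(t_end):
        if t in input_spikes:
            active.append(t)
        u = sum(weight * math.exp(-(t - s) / tau) for s in active)
        if u >= theta:
            out.append(t)
            active = []               # reset after firing
    return out

# Two close spikes (t=5, 6) sum over threshold; an isolated spike (t=30)
# does not.
spikes = srm_neuron({5, 6, 30})
```

The cheapness of this update (one exponential per recent input spike) is what makes the model practical on a smartphone.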
Pub Date: 2014-12-01 | DOI: 10.1109/RIISS.2014.7009180
Shun Kakehashi, T. Motoyoshi, K. Koyanagi, T. Oshima, H. Masuta, H. Kawakami
As a method of teaching fundamental programming concepts to visually impaired persons and novice programmers, we developed the P-CUBE algorithm education tool, with which users can control a mobile robot simply by positioning wooden blocks on a mat. The fundamental programming concepts taught by P-CUBE consist of three elements: sequences, branches, and loops. The P-CUBE system consists of a mobile robot, a program mat, programming blocks, and a personal computer (PC). The programming blocks use radio frequency identification (RFID) tags alone, and thus require no precision equipment such as microcomputers. Furthermore, since P-CUBE is designed to be operated via tactile information, it can be used by visually impaired persons. In this paper, we report on the P-CUBE system configuration and a programming workshop held for visually impaired persons. We then propose P-CUBE device improvements formulated from subjective assessments obtained from workshop participants.
Title: Improvement of P-CUBE: Algorithm education tool for visually impaired persons
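The sequence/branch/loop idea behind block programming can be illustrated with a toy interpreter that maps a list of blocks to robot motion on a grid. The block vocabulary here is hypothetical, not P-CUBE's actual block set:

```python
def run_blocks(blocks):
    # Toy interpreter for a P-CUBE-style block program: motion blocks move
    # a robot on a grid, and ("loop", n, [...]) repeats its body n times.
    x, y, heading = 0, 0, 0            # heading: 0=N, 1=E, 2=S, 3=W
    moves = [(0, 1), (1, 0), (0, -1), (-1, 0)]

    def step(prog):
        nonlocal x, y, heading
        for b in prog:
            if b == "forward":
                dx, dy = moves[heading]
                x, y = x + dx, y + dy
            elif b == "turn_right":
                heading = (heading + 1) % 4
            elif isinstance(b, tuple) and b[0] == "loop":
                for _ in range(b[1]):
                    step(b[2])         # recurse into the loop body

    step(blocks)
    return x, y

# A square path: repeating (forward, turn_right) four times returns home.
pos = run_blocks([("loop", 4, ["forward", "turn_right"])])
```

In the physical system, reading the RFID tag under each mat position yields exactly such a block list, which is why no electronics are needed in the blocks themselves.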
Pub Date: 2014-12-01 | DOI: 10.1109/RIISS.2014.7009178
Naoki Masuyama, Md. Nazrul Islam, C. Loo
Associative memory is one of the most significant and effective functions in communication, and several types of artificial associative memory models have been developed. In psychology, it is known that human memory and emotions are closely related to each other, as in the mood-congruency effect. In addition, emotions are sensitive to sympathy with the facial expressions of communication partners. In this paper, we develop emotional models for robot partners and propose an interactive robot system with a complex-valued bidirectional associative memory model in which associations are affected by emotional factors. We use multi-modal information such as gestures and facial expressions to generate the emotional factors. The results of an interactive communication experiment show that the system has the potential to provide suitable information for the interactive space.
Title: Affective communication robot partners using associative memory with mood congruency effects
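The underlying mechanism can be sketched with a plain real-valued Hebbian BAM, where a per-pattern gain stands in for the emotional factor: mood-congruent memories get a larger contribution to the weight matrix. This is a simplification of the paper's complex-valued model, and the gain mechanism is an assumption for illustration:

```python
def train_bam(pairs, mood_weights=None):
    # Hebbian bidirectional associative memory; mood_weights scales each
    # stored pair's contribution, a crude stand-in for mood congruency
    # (the original model is complex-valued).
    n, m = len(pairs[0][0]), len(pairs[0][1])
    w = mood_weights or [1.0] * len(pairs)
    W = [[0.0] * n for _ in range(m)]
    for (x, y), g in zip(pairs, w):
        for i in range(m):
            for j in range(n):
                W[i][j] += g * y[i] * x[j]
    return W

def recall(W, x):
    # One forward pass x -> y with sign thresholding.
    return [1 if sum(W[i][j] * x[j] for j in range(len(x))) >= 0 else -1
            for i in range(len(W))]

# Two orthogonal bipolar pattern pairs recall cleanly.
pairs = [([1, -1, 1, -1], [1, 1, -1]),
         ([-1, -1, 1, 1], [-1, 1, 1])]
W = train_bam(pairs)
y = recall(W, [1, -1, 1, -1])
```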
Pub Date: 2014-12-01 | DOI: 10.1109/RIISS.2014.7009185
F. Kobayashi, Hayato Kanno, Hiroyuki Nakamoto, F. Kojima
Multi-fingered robot hands have received much attention in various fields. We have developed a multi-fingered robot hand with multi-axis force/torque sensors. For stable transportation, the robot hand must pick up an object without dropping it and place it without damaging it. This paper deals with a pick-and-place motion performed by the developed robot hand. In this motion, the robot hand detects slip using the multi-axis force/torque sensors and adjusts the pick-and-place motion according to the detected slip. The effectiveness of the proposed grasp selection is verified through experiments with the universal robot hand.
Title: Slip based pick-and-place by universal robot hand with force/torque sensors
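A common way to turn force/torque readings into a slip signal is to watch the tangential-to-normal force ratio against the friction coefficient and raise the grip force when the margin is violated. The sketch below uses this standard idea with illustrative constants; it is not the paper's controller:

```python
def grip_controller(readings, mu=0.6, margin=1.2, f_init=2.0, gain=1.5):
    # Slip-based grip regulation sketch: if the tangential force f_t
    # approaches mu * f_n (within a safety margin), incipient slip is
    # assumed and the commanded normal force f_n is raised.
    f_n = f_init
    log = []
    for f_t in readings:               # tangential force samples (N)
        if f_t > mu * f_n / margin:    # slip margin violated
            f_n = gain * f_t / mu      # raise grip with a safety factor
        log.append(f_n)
    return log

# Grip force stays put while the load is light, then jumps when the
# tangential load grows (e.g. during lifting).
forces = grip_controller([0.5, 0.8, 1.5, 1.5])
```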
Pub Date: 2014-12-01 | DOI: 10.1109/RIISS.2014.7009175
T. Obo, N. Kubota
In this paper, we focus on human behavior estimation for human-robot interaction. Human behavior recognition is one of the most important techniques here, because bodily expressions convey important and effective information to robots. This paper proposes a learning structure composed of two learning modules, for feature extraction and for contextual relation modeling, using a Growing Neural Gas (GNG) and a Spiking Neural Network (SNN). The GNG is applied to feature extraction from human behavior, and the SNN is used to associate the features with verbal labels that robots can acquire through human-robot interaction. Furthermore, we present an experimental result and discuss the effectiveness of the proposed method.
Title: Behavior pattern learning for robot partner based on growing neural networks in informationally structured space
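The GNG adaptation that underlies the feature-extraction module can be sketched in one step: find the two nearest units to a sample, connect them, and pull the winner (and its topological neighbors) toward the sample. Node insertion, error accumulation, and edge aging are omitted for brevity; constants are the usual illustrative choices:

```python
def gng_step(nodes, edges, x, eps_b=0.2, eps_n=0.006):
    # One adaptation step of growing neural gas: s1/s2 are the nearest and
    # second-nearest units; they get connected, and s1 plus its neighbors
    # move toward the sample x.
    d = lambda a, b: sum((ai - bi) ** 2 for ai, bi in zip(a, b))
    order = sorted(range(len(nodes)), key=lambda i: d(nodes[i], x))
    s1, s2 = order[0], order[1]
    edges.add(frozenset((s1, s2)))
    nodes[s1] = [ni + eps_b * (xi - ni) for ni, xi in zip(nodes[s1], x)]
    for e in edges:
        if s1 in e:
            other = next(iter(e - {s1}))
            nodes[other] = [ni + eps_n * (xi - ni)
                            for ni, xi in zip(nodes[other], x)]
    return s1

nodes = [[0.0, 0.0], [1.0, 1.0], [0.5, 0.9]]
edges = set()
winner = gng_step(nodes, edges, [0.1, 0.1])
```

Fed a stream of posture vectors, the resulting graph of units serves as the compact feature codebook that the SNN then associates with verbal labels.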
Pub Date: 2014-12-01 | DOI: 10.1109/RIISS.2014.7009166
Shogo Yoshida, N. Kubota
Social isolation among elderly people has become an important problem in Japan, and introducing robot partners to support socially isolated elderly people's lives is one solution. This paper discusses a conversation selection model using Growing Neural Gas (GNG). The robot partner is composed of a smart device used as a face module and a robot body module with two arms. First, we discuss the necessity of robot partners for elderly people's life support, as well as the connection between the conversation selection model and the robot partner's communication performance. Next, we propose a conversation selection model using GNG for determining the robot partner's utterance from voice recognition results. We conduct experiments to discuss the effectiveness of the proposed method based on GNG and the Jensen-Shannon (JS) divergence. Finally, we show the robot partner's capability in selecting words while holding a conversation using the proposed method.
Title: Growing neural gas based conversation selection model for robot partner and human communication system
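The JS divergence used in the evaluation is a symmetric, bounded comparison between two discrete distributions (e.g. word or topic distributions from the recognition result and from a candidate utterance). A minimal implementation, with the assumption that base-2 logs are used so the value lies in [0, 1]:

```python
import math

def js_divergence(p, q):
    # Jensen-Shannon divergence between two discrete distributions:
    # the mean of each distribution's KL divergence to their midpoint.
    m = [(pi + qi) / 2 for pi, qi in zip(p, q)]

    def kl(a, b):
        # KL divergence with the usual 0 * log(0) = 0 convention.
        return sum(ai * math.log2(ai / bi) for ai, bi in zip(a, b) if ai > 0)

    return 0.5 * kl(p, m) + 0.5 * kl(q, m)

same = js_divergence([0.5, 0.5], [0.5, 0.5])   # identical distributions
far = js_divergence([1.0, 0.0], [0.0, 1.0])    # disjoint distributions
```

Unlike plain KL divergence, JS is symmetric and finite even when the two distributions have disjoint support, which makes it convenient for comparing sparse word histograms.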
Pub Date: 2014-12-01 | DOI: 10.1109/RIISS.2014.7009171
R. Chellali, K. Baizid
In this paper we present a complete and effective system for deploying N semi-autonomous robots to cover a given area for video surveillance and search purposes. The coverage problem is solved with a new technique based on Voronoi tessellations. To supervise a given area, a set of viewpoints is extracted and then visited by a group of mobile rovers. The robots' paths are calculated by solving a traveling salesman problem with multi-objective genetic algorithms. In the running phase, the robots deal with both motion and sensor uncertainties while executing the pre-established paths. Results from an indoor scenario are given.
Title: Multi-robots coverage approach
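The visiting-order subproblem — given extracted viewpoints, in what order should a rover visit them — is a traveling salesman instance. As a sketch, a greedy nearest-neighbor tour stands in for the paper's multi-objective genetic algorithm; the viewpoint coordinates are made up:

```python
def nn_tour(points, start=0):
    # Greedy nearest-neighbor tour over coverage viewpoints: from the
    # current viewpoint, always visit the closest unvisited one next.
    d = lambda a, b: ((a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2) ** 0.5
    unvisited = set(range(len(points))) - {start}
    tour = [start]
    while unvisited:
        nxt = min(unvisited, key=lambda i: d(points[tour[-1]], points[i]))
        tour.append(nxt)
        unvisited.remove(nxt)
    return tour

# Hypothetical viewpoints extracted from a Voronoi tessellation of the area.
viewpoints = [(0, 0), (5, 0), (1, 0), (6, 1)]
order = nn_tour(viewpoints)
```

A GA-based solver improves on this greedy baseline and, being population-based, can trade off multiple objectives (path length, balance between robots) at once.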