
2014 IEEE Symposium on Robotic Intelligence in Informationally Structured Space (RiiSS): Latest Publications

Fuzzy neural network based activity estimation for recording human daily activity
M. Nii, Kazunobu Takahama, T. Iwamoto, Takafumi Matsuda, Yuki Matsumoto, K. Maenaka
We previously proposed a human activity estimation method based on a standard three-layer feedforward neural network. The purpose of the method is to record a subject's activity automatically; the recorded activity includes not only the actual accelerometer data but also a rough description of the subject's activity. To train the neural networks, we needed to prepare numerical accelerometer datasets measured for every subject. In this paper, we propose a fuzzy neural network based method for recording the subject's activity. The proposed fuzzy neural network can handle both real and fuzzy numbers as inputs and outputs. Since the proposed method can handle fuzzy numbers, the training dataset can contain some general rules, for example, "If the x- and y-axis accelerometer outputs are almost zero and the z-axis accelerometer output is equal to the acceleration of gravity, then the subject is standing."
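The abstract gives no implementation details; the minimal Python sketch below only illustrates the fuzzy-number idea: a triangular-style fuzzy input handled at one alpha-cut (i.e. as an interval) and propagated through a single neuron by interval arithmetic. The weights, bias, and encoding of the "standing" rule are illustrative assumptions, not values from the paper.

```python
# Sketch: propagating fuzzy (interval) accelerometer inputs through one neuron.
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def neuron_interval(intervals, weights, bias):
    """Propagate input intervals through one neuron: the weighted sum is
    monotone in each input, so the output endpoints come from picking each
    input's lo or hi according to the weight sign."""
    lo = hi = bias
    for (a, b), w in zip(intervals, weights):
        lo += w * (a if w >= 0 else b)
        hi += w * (b if w >= 0 else a)
    return sigmoid(lo), sigmoid(hi)  # sigmoid is monotone, endpoints map directly

# Fuzzy example in the spirit of the quoted rule (accelerations in g units):
x_axis = (-0.05, 0.05)   # "almost zero"
y_axis = (-0.05, 0.05)   # "almost zero"
z_axis = (0.95, 1.05)    # "about the acceleration of gravity"

weights = [-0.8, -0.8, 2.5]   # illustrative weights of a "standing" neuron
print(neuron_interval([x_axis, y_axis, z_axis], weights, bias=-1.5))
```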
{"title":"Fuzzy neural network based activity estimation for recording human daily activity","authors":"M. Nii, Kazunobu Takahama, T. Iwamoto, Takafumi Matsuda, Yuki Matsumoto, K. Maenaka","doi":"10.1109/RIISS.2014.7009174","DOIUrl":"https://doi.org/10.1109/RIISS.2014.7009174","url":null,"abstract":"We proposed a standard three-layer feedforward neural network based human activity estimation method. The purpose of the proposed method is to record the subject activity automatically. Here, the recorded activity includes not only actual accelerometer data but also rough description of his/her activity. In order to train the neural networks, we needed to prepare numerical datasets of accelerometer which are measured for every subject person. In this paper, we propose a fuzzy neural network based method for recording the subject activity. The proposed fuzzy neural network can handle both real and fuzzy numbers as inputs and outputs. Since the proposed method can handle fuzzy numbers, the training dataset can contain some general rules, for example, “If x and y axis accelerometer outputs are almost zero and z axis accelerometer output is equal to acceleration of gravity then the subject person is standing.”","PeriodicalId":270157,"journal":{"name":"2014 IEEE Symposium on Robotic Intelligence in Informationally Structured Space (RiiSS)","volume":"309 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122807345","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Unknown object extraction based on plane detection in 3D space
H. Masuta, Shinichiro Makino, Hun-ok Lim, T. Motoyoshi, K. Koyanagi, T. Oshima
This paper describes unknown object extraction based on plane detection for an intelligent robot using a 3D range sensor. Various methods have previously been proposed for perceiving unknown environments. However, conventional unknown object extraction methods need predefined knowledge, and suffer from high computational costs and low accuracy for small objects. To solve these problems, we propose an unknown object extraction method, based on 3D plane detection, that can run online. To detect planes in 3D space, we have proposed a simple plane detection that applies particle swarm optimization (PSO) with region growing (RG), together with integrated object plane detection. The simple plane detection focuses on detecting small planes and on reducing computational costs, while the integrated object plane detection focuses on the stability of the detected plane. Our plane detection method can detect many planes in view. This paper proposes an object extraction method that groups planes according to their relative positions. Through experiments, we show that unknown objects are extracted at low computational cost, and that the proposed method can extract objects even in complicated environments.
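As a rough illustration of the PSO stage only (region growing and the integrated object plane detection are omitted), the sketch below fits a plane (a, b, c, d) to a point cloud by maximizing the inlier count. The swarm parameters, tolerance, and synthetic data are assumptions, not values from the paper.

```python
import numpy as np

def plane_inliers(points, p, tol=0.01):
    """Count points within tol of the plane with parameters p = (a, b, c, d)."""
    n = p[:3] / (np.linalg.norm(p[:3]) + 1e-12)
    return np.sum(np.abs(points @ n + p[3]) < tol)

def pso_plane(points, n_particles=30, iters=50, seed=0):
    rng = np.random.default_rng(seed)
    pos = rng.uniform(-1, 1, size=(n_particles, 4))
    vel = np.zeros_like(pos)
    pbest = pos.copy()
    pbest_fit = np.array([plane_inliers(points, p) for p in pos])
    gbest = pbest[np.argmax(pbest_fit)].copy()
    for _ in range(iters):
        r1, r2 = rng.random((2, n_particles, 4))
        # standard PSO velocity update: inertia + cognitive + social terms
        vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
        pos += vel
        fit = np.array([plane_inliers(points, p) for p in pos])
        improved = fit > pbest_fit
        pbest[improved], pbest_fit[improved] = pos[improved], fit[improved]
        gbest = pbest[np.argmax(pbest_fit)].copy()
    return gbest

# synthetic test: points near the plane z = 0 plus uniform outliers
rng = np.random.default_rng(1)
plane_pts = np.c_[rng.uniform(-1, 1, (200, 2)), rng.normal(0, 0.002, 200)]
outliers = rng.uniform(-1, 1, (50, 3))
print(pso_plane(np.vstack([plane_pts, outliers])))
```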
{"title":"Unknown object extraction based on plane detection in 3D space","authors":"H. Masuta, Shinichiro Makino, Hun-ok Lim, T. Motoyoshi, K. Koyanagi, T. Oshima","doi":"10.1109/RIISS.2014.7009183","DOIUrl":"https://doi.org/10.1109/RIISS.2014.7009183","url":null,"abstract":"This paper describes an unknown object extraction based on plane detection for an intelligent robot using a 3D range sensor. Previously, various methods have been proposed to perceive unknown environments. However, conventional unknown object extraction methods need predefined knowledge, and have limitations with high computational costs and low-accuracy for small object. In order to solve these problems, we propose an online processable unknown object extraction method based on 3D plane detection. To detect planes in 3D space, we have proposed a simple plane detection that applies particle swarm optimization (PSO) with region growing (RG), and integrated object plane detection. The simple plane detection is focused on small plane detection and on reducing computational costs. Furthermore, integrated object plane detection focuses on the stability of the detecting plane. Our plane detection method can detect a lot of planes in sight. This paper proposes an object extraction method which is grouped some planes according to the relative position. Through experiment, we show that unknown objects are extracted with low computational cost. Moreover, the proposed method extracts some objects in complicated environment.","PeriodicalId":270157,"journal":{"name":"2014 IEEE Symposium on Robotic Intelligence in Informationally Structured Space (RiiSS)","volume":"65 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126449341","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 1
An adaptive force reflective teleoperation control method using online environment impedance estimation
Faezeh Heydari Khabbaz, A. Goldenberg, J. Drake
This paper proposes a new adaptive method for controlling two-channel bilateral teleoperation systems. The control method consists of adaptive force feedback and motion command scaling factors that ensure stable teleoperation with the maximum achievable transparency at every moment of operation. The method is based on integrating real-time estimation of the robot's environment impedance with the adaptive force and motion scaling factor generator. This paper formulates the adaptive scaling factors for stable teleoperation based on the impedance models of the master and slave and the estimated impedance of the environment. The feasibility and accuracy of an online environment impedance estimation method are analyzed through simulations and experiments, and the proposed adaptive bilateral control method is then verified through simulation studies. Results show stable interactions with maximum transparency for the simulated teleoperation system.
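The abstract does not specify the estimator; a common choice for online impedance estimation is recursive least squares over a spring-damper contact model f = k·x + b·v, shown below as a hedged sketch. The model structure, forgetting factor, and synthetic contact data are assumptions.

```python
import numpy as np

def rls_impedance(xs, vs, fs, lam=0.99):
    """Estimate [k, b] in f = k*x + b*v with recursive least squares."""
    theta = np.zeros(2)          # current estimate [k, b]
    P = np.eye(2) * 1e3          # inverse-correlation matrix
    for x, v, f in zip(xs, vs, fs):
        phi = np.array([x, v])
        K = P @ phi / (lam + phi @ P @ phi)       # gain vector
        theta = theta + K * (f - phi @ theta)     # correct by prediction error
        P = (P - np.outer(K, phi @ P)) / lam      # covariance update with forgetting
    return theta

# synthetic contact data: k = 500 N/m, b = 10 Ns/m, small sensor noise
rng = np.random.default_rng(0)
t = np.linspace(0, 1, 500)
x = 0.01 * np.sin(2 * np.pi * 2 * t)
v = np.gradient(x, t)
f = 500 * x + 10 * v + rng.normal(0, 0.05, t.size)
print(rls_impedance(x, v, f))   # should approach [500, 10]
```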
{"title":"An adaptive force reflective teleoperation control method using online environment impedance estimation","authors":"Faezeh Heydari Khabbaz, A. Goldenberg, J. Drake","doi":"10.1109/RIISS.2014.7009167","DOIUrl":"https://doi.org/10.1109/RIISS.2014.7009167","url":null,"abstract":"This paper proposes a new adaptive method for two-channel bilateral teleoperation systems control; the control method consists of adaptive force feedback and motion command scaling factors that ensure stable teleoperation with maximum achievable transparency at every moment of operation. The method is based on the integration of the real time estimation of the robot's environment impedance with the adaptive force and motion scaling factors generator. This paper formulates the adaptive scaling factors for stable teleoperation based on the impedance models of master, slave and estimated impedance of the environment. Feasibility and accuracy of an online environment impedance estimation method are analyzed through simulations and experiments. Then the proposed adaptive bilateral control method is verified through simulation studies. Results show stable interactions with maximum transparency for the simulated teleoperation system.","PeriodicalId":270157,"journal":{"name":"2014 IEEE Symposium on Robotic Intelligence in Informationally Structured Space (RiiSS)","volume":"2 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133436699","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 3
Spiking neural network based emotional model for robot partner
János Botzheim, N. Kubota
In this paper, a spiking neural network based emotional model is proposed for a smartphone-based robot partner. Since a smartphone has limited computational power compared to a personal computer, a simple spike response model is applied to the neurons in the neural network. The network has three layers, following the concepts of emotion, feeling, and mood. Perceptual input stimulates the neurons in the first (emotion) layer. Weight adjustment based on Hebbian learning is also proposed for the interconnected neurons in the feeling layer and between the feeling and mood layers. Experiments are presented to validate the proposed method. Based on the emotional model, output actions for the robot, such as gestures and facial expressions, are calculated.
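The sketch below shows one way to implement the simple spike response model mentioned above: a neuron whose potential is the sum of postsynaptic kernels from input spikes plus a refractory kernel after its own spikes. Kernel shapes, time constants, weights, and spike trains are illustrative assumptions.

```python
import math

def eps(s, tau=5.0):
    """Postsynaptic potential kernel (alpha-shaped), zero for s <= 0."""
    return (s / tau) * math.exp(1 - s / tau) if s > 0 else 0.0

def eta(s, tau_r=4.0, reset=-2.0):
    """Refractory kernel applied after the neuron's own spike."""
    return reset * math.exp(-s / tau_r) if s > 0 else 0.0

def run_srm(input_spikes, weights, threshold=1.0, t_end=50, dt=1.0):
    """Simulate one spike-response-model neuron over discretized time."""
    out_spikes = []
    t = 0.0
    while t < t_end:
        u = sum(w * eps(t - tf)
                for tfs, w in zip(input_spikes, weights) for tf in tfs)
        u += sum(eta(t - to) for to in out_spikes)
        if u >= threshold:
            out_spikes.append(t)
        t += dt
    return out_spikes

# two presynaptic neurons spiking at the given times; illustrative weights
print(run_srm([[2, 4, 6], [3, 5]], [0.6, 0.5]))
```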
{"title":"Spiking neural network based emotional model for robot partner","authors":"János Botzheim, N. Kubota","doi":"10.1109/RIISS.2014.7009165","DOIUrl":"https://doi.org/10.1109/RIISS.2014.7009165","url":null,"abstract":"In this paper, a spiking neural network based emotional model is proposed for a smart phone based robot partner. Since smart phone has limited computational power compared to personal computers, a simple spike response model is applied for the neurons in the neural network. The network has three layers following the concept of emotion, feeling, and mood. The perceptual input stimulates the neurons in the first, emotion layer. Weights adjustment is also proposed for the interconnected neurons in the feeling layer and between the feeling and mood layer based on Hebbian learning. Experiments are presented to validate the proposed method. Based on the emotional model, the output action such as gestural and facial expressions for the robot is calculated.","PeriodicalId":270157,"journal":{"name":"2014 IEEE Symposium on Robotic Intelligence in Informationally Structured Space (RiiSS)","volume":"55 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125067057","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 9
Improvement of P-CUBE: Algorithm education tool for visually impaired persons
Shun Kakehashi, T. Motoyoshi, K. Koyanagi, T. Oshima, H. Masuta, H. Kawakami
As a method of teaching fundamental programming concepts to visually impaired persons and novice programmers, we developed the P-CUBE algorithm education tool, with which users can control a mobile robot simply by positioning wooden blocks on a mat. The fundamental programming concepts taught by P-CUBE consist of three elements: sequences, branches, and loops. The P-CUBE system consists of a mobile robot, a program mat, programming blocks, and a personal computer (PC). The programming blocks use radio frequency identification (RFID) tags alone, and thus require no precision equipment such as microcomputers. Furthermore, since P-CUBE is designed to be operated via tactile information, it can be used by visually impaired persons. In this paper, we report on the P-CUBE system configuration and a programming workshop held for visually impaired persons. We then propose P-CUBE device improvements formulated from subjective assessments obtained from workshop participants.
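As an illustration of how blocks read as RFID tag IDs might be turned into robot commands, here is a hypothetical sketch of a tiny interpreter covering sequences and one level of loops (branches omitted). The tag IDs, command names, and loop encoding are all invented for the example; the paper does not specify them.

```python
# Hypothetical tag-ID -> instruction table; IDs and command names are invented.
TAGS = {
    "A1": ("move", "forward"),
    "A2": ("move", "turn_left"),
    "B1": ("loop_start", 3),      # repeat the enclosed blocks 3 times
    "B2": ("loop_end", None),
}

def interpret(tag_ids):
    """Expand a row of programming blocks (read as RFID tag IDs)
    into a flat command sequence, handling one level of loops."""
    commands, i = [], 0
    while i < len(tag_ids):
        op, arg = TAGS[tag_ids[i]]
        if op == "move":
            commands.append(arg)
            i += 1
        elif op == "loop_start":
            j = tag_ids.index("B2", i)               # find the matching loop_end
            body = [TAGS[t][1] for t in tag_ids[i + 1:j]]
            commands.extend(body * arg)
            i = j + 1
        else:
            i += 1
    return commands

# blocks placed on the mat left-to-right: forward, [loop x3: turn_left], forward
print(interpret(["A1", "B1", "A2", "B2", "A1"]))
```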
{"title":"Improvement of P-CUBE: Algorithm education tool for visually impaired persons","authors":"Shun Kakehashi, T. Motoyoshi, K. Koyanagi, T. Oshima, H. Masuta, H. Kawakami","doi":"10.1109/RIISS.2014.7009180","DOIUrl":"https://doi.org/10.1109/RIISS.2014.7009180","url":null,"abstract":"As a method of teaching fundamental programming concepts to visually impaired persons and novice programmers, we developed the P-CUBE algorithm education tool, with which users are able to control a mobile robot simply by positioning wooden blocks on a mat. The fundamental programming concepts taught by P-CUBE consist of three elements: sequences, branches and loops. The P-CUBE system consists of a mobile robot, a program mat, programming blocks, and a personal computer (PC). The programming blocks utilize radio frequency identification (RFID) tags alone, and thus require no precision equipment such as microcomputers. Furthermore, since P-CUBE is designed to be operated via tactile information, it can be utilized by visually impaired persons. In this paper, we report on the P-CUBE system configuration and a programming workshop held for visuarlly impaired persons. We then propose P-CUBE device improvements formulated through subjective assessments obtained from workshop participants.","PeriodicalId":270157,"journal":{"name":"2014 IEEE Symposium on Robotic Intelligence in Informationally Structured Space (RiiSS)","volume":"38 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121728838","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 15
Affective communication robot partners using associative memory with mood congruency effects
Naoki Masuyama, Md. Nazrul Islam, C. Loo
Associative memory is one of the significant and effective functions in communication. Several types of artificial associative memory models have been developed. In the field of psychology, it is known that human memory and emotions are closely related to each other, as in the mood-congruency effects. In addition, emotions are sensitive to sympathy with the facial expressions of communication partners. In this paper, we develop emotional models for robot partners and propose an interactive robot system with a complex-valued bidirectional associative memory model in which associations are affected by emotional factors. We use multi-modal information such as gestures and facial expressions to generate the emotional factors. The results of an interactive communication experiment show that the system can potentially provide suitable information for the interactive space.
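For intuition, here is a minimal real-valued simplification of a bidirectional associative memory; the paper's model is complex-valued and emotion-modulated, which this sketch does not reproduce. It uses Hebbian training W = Σ y xᵀ over bipolar pattern pairs and alternating sign recall.

```python
import numpy as np

def train_bam(X, Y):
    """Hebbian weight matrix W = sum_k y_k x_k^T over bipolar pattern pairs."""
    return sum(np.outer(y, x) for x, y in zip(X, Y))

def recall(W, x, steps=5):
    """Bidirectional recall: alternate y = sgn(W x), x = sgn(W^T y)."""
    for _ in range(steps):
        y = np.sign(W @ x); y[y == 0] = 1
        x = np.sign(W.T @ y); x[x == 0] = 1
    return x, y

# two pattern pairs (e.g. a gesture code associated with an expression code)
X = [np.array([1, -1, 1, -1]), np.array([-1, -1, 1, 1])]
Y = [np.array([1, 1, -1]), np.array([-1, 1, 1])]
W = train_bam(X, Y)
noisy = np.array([1, 1, 1, -1])   # X[0] with one flipped bit
print(recall(W, noisy))           # converges to (X[0], Y[0])
```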
{"title":"Affective communication robot partners using associative memory with mood congruency effects","authors":"Naoki Masuyama, Md. Nazrul Islam, C. Loo","doi":"10.1109/RIISS.2014.7009178","DOIUrl":"https://doi.org/10.1109/RIISS.2014.7009178","url":null,"abstract":"Associative memory is one of the significant and effective functions in communication. Conventionally, several types of artificial associative memory models have been de-veloped. In the field of psychology, it is known that human memory and emotions are closely related each other, such as the mood-congruency effects. In addition, emotions are sensitive to sympathy for facial expressions of communication partners. In this paper, we develop the emotional models for the robot partners, and propose an interactive robot system with a complex-valued bidirectional associative memory model that associations are affected by emotional factors. We utilize multi-modal information such as gesture and facial expressions to generate emotional factors. The results of interactive communication experiment show that there is a possibility to provide the suitable information for the interactive space.","PeriodicalId":270157,"journal":{"name":"2014 IEEE Symposium on Robotic Intelligence in Informationally Structured Space (RiiSS)","volume":"6 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124498059","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 5
Slip based pick-and-place by universal robot hand with force/torque sensors
F. Kobayashi, Hayato Kanno, Hiroyuki Nakamoto, F. Kojima
Multi-fingered robot hands receive much attention in various fields. We have developed a multi-fingered robot hand with multi-axis force/torque sensors. For stable transportation, the robot hand must pick up an object without dropping it and place it without damaging it. This paper deals with a pick-and-place motion performed by the developed robot hand. In this motion, the robot hand detects slip using the multi-axis force/torque sensors and carries out the pick-and-place motion according to the detected slip. The effectiveness of the proposed grasp selection is verified through experiments with the universal robot hand.
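The abstract does not describe the slip criterion; a common scheme, sketched below under that assumption, flags incipient slip when the tangential/normal force ratio from the force/torque sensor approaches the friction cone boundary and then tightens the grasp. The friction coefficient, margin, and gain are illustrative.

```python
import math

def slip_detected(fx, fy, fz, mu=0.6, margin=0.9):
    """Flag incipient slip when the tangential/normal force ratio
    approaches the friction cone boundary |Ft| = mu * |Fn|."""
    ft = math.hypot(fx, fy)   # tangential force magnitude
    fn = abs(fz)              # normal force magnitude
    return fn > 1e-6 and ft > margin * mu * fn

def pick_step(grip_force, sensor_reading, gain=1.2, f_max=20.0):
    """One control step of the pick-and-place loop: tighten on slip."""
    if slip_detected(*sensor_reading):
        grip_force = min(grip_force * gain, f_max)
    return grip_force

grip = 5.0
for reading in [(0.5, 0.2, 4.0), (2.1, 1.0, 4.0), (2.3, 1.1, 5.5)]:
    grip = pick_step(grip, reading)
    print(grip)   # 5.0 (no slip), 6.0 (slip -> tighten), 6.0 (stable again)
```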
{"title":"Slip based pick-and-place by universal robot hand with force/torque sensors","authors":"F. Kobayashi, Hayato Kanno, Hiroyuki Nakamoto, F. Kojima","doi":"10.1109/RIISS.2014.7009185","DOIUrl":"https://doi.org/10.1109/RIISS.2014.7009185","url":null,"abstract":"A multi-fingered robot hand receives much attention in various fields. We have developed the multi-fingered robot hand with the multi-axis force/torque sensors. For stable transportation, the robot hand must pick up an object without dropping it and places it without damaging it. This paper deals with a pick-and-place motion by the developed robot hand. In this motion, the robot hand detects a slip by using the multi-axis force/torque sensors and implements the pick-and-place motion according to the detected slip. The effectiveness of the proposed grasp selection is verified through some experiments with the universal robot hand.","PeriodicalId":270157,"journal":{"name":"2014 IEEE Symposium on Robotic Intelligence in Informationally Structured Space (RiiSS)","volume":"82 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116474308","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 3
Behavior pattern learning for robot partner based on growing neural networks in informationally structured space
T. Obo, N. Kubota
In this paper, we focus on human behavior estimation for human-robot interaction. Human behavior recognition is one of the most important techniques, because bodily expressions convey important and effective information to robots. This paper proposes a learning structure composed of two learning modules, for feature extraction and contextual relation modeling, using Growing Neural Gas (GNG) and a Spiking Neural Network (SNN). GNG is applied to feature extraction from human behavior, and the SNN is used to associate the features with verbal labels that robots can acquire through human-robot interaction. Furthermore, we show an experimental result and discuss the effectiveness of the proposed method.
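For reference, a condensed single-pass GNG sketch is shown below, following Fritzke's standard update rules; node deletion is omitted, and all parameters are generic defaults rather than values from the paper. In the paper's setting the 2-D toy data would be replaced by human-motion feature vectors.

```python
import numpy as np

def gng(data, max_nodes=20, eps_b=0.05, eps_n=0.005, age_max=50,
        lam=100, alpha=0.5, d=0.995, seed=0):
    rng = np.random.default_rng(seed)
    nodes = [data[rng.integers(len(data))].copy() for _ in range(2)]
    error = [0.0, 0.0]
    edges = {}                                    # (i, j) with i < j -> age
    for step, x in enumerate(data, 1):
        dist = [np.sum((x - w) ** 2) for w in nodes]
        s1, s2 = np.argsort(dist)[:2]             # winner and runner-up
        error[s1] += dist[s1]
        nodes[s1] += eps_b * (x - nodes[s1])      # move winner toward x
        for (i, j) in list(edges):                # age winner's edges, move neighbors
            if s1 in (i, j):
                edges[(i, j)] += 1
                other = j if i == s1 else i
                nodes[other] += eps_n * (x - nodes[other])
        edges[tuple(sorted((s1, s2)))] = 0        # refresh winner-pair edge
        edges = {e: a for e, a in edges.items() if a <= age_max}
        if step % lam == 0 and len(nodes) < max_nodes:
            q = int(np.argmax(error))             # node with largest error
            nbrs = [j if i == q else i for (i, j) in edges if q in (i, j)]
            if nbrs:
                f = max(nbrs, key=lambda n: error[n])
                nodes.append((nodes[q] + nodes[f]) / 2)   # insert between q and f
                error[q] *= alpha; error[f] *= alpha
                error.append(error[q])
                r = len(nodes) - 1
                edges.pop(tuple(sorted((q, f))), None)
                edges[tuple(sorted((q, r)))] = 0
                edges[tuple(sorted((f, r)))] = 0
        error = [e * d for e in error]            # global error decay
    return np.array(nodes)

# toy data: two clusters standing in for behavior feature vectors
rng = np.random.default_rng(1)
data = np.vstack([rng.normal(0, 0.1, (300, 2)), rng.normal(1, 0.1, (300, 2))])
print(gng(data).shape)
```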
{"title":"Behavior pattern learning for robot partner based on growing neural networks in informationally structured space","authors":"T. Obo, N. Kubota","doi":"10.1109/RIISS.2014.7009175","DOIUrl":"https://doi.org/10.1109/RIISS.2014.7009175","url":null,"abstract":"In this paper, we focus on human behavior estimation for human-robot interaction. Human behavior recognition is one of the most important techniques, because bodily expressions convey important and effective information for robots. This paper proposes a learning structure composed of two learning modules for feature extraction and contextual relation modeling, using Growing Neural Gas (GNG) and Spiking Neural Network (SNN). GNG is applied to the feature extraction of human behavior, and SNN is used to associate the features with verbal labels that robots can get through human-robot interaction. Furthermore, we show an experimental result, and discuss effectiveness of the proposed method.","PeriodicalId":270157,"journal":{"name":"2014 IEEE Symposium on Robotic Intelligence in Informationally Structured Space (RiiSS)","volume":"18 3 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130432805","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Growing neural gas based conversation selection model for robot partner and human communication system
Shogo Yoshida, N. Kubota
Social isolation among elderly people has become an important problem in Japan. Therefore, introducing robot partners to support the lives of socially isolated elderly people has become one of the solutions. This paper discusses a conversation selection model using Growing Neural Gas (GNG). The robot partner is composed of a smart device used as a face module and a robot body module with two arms. First, we discuss the necessity of robot partners for elderly life support, together with the connection between the conversation selection model and the robot partner's communication performance. Next, we propose a conversation selection model using GNG that determines the robot partner's utterance from voice recognition results. We conduct experiments to discuss the effectiveness of the proposed method based on GNG and JS divergence. Finally, we show the robot partner's capability of selecting words during conversation using the proposed method.
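The JS (Jensen-Shannon) divergence mentioned above can be used to compare recognized speech with candidate utterance topics; the sketch below shows that comparison step only. The GNG side is omitted, and the vocabulary and word distributions are invented for the example.

```python
import math

def kl(p, q):
    """Kullback-Leibler divergence, skipping zero-probability terms of p."""
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

def js_divergence(p, q):
    """Jensen-Shannon divergence between two word distributions."""
    m = [(pi + qi) / 2 for pi, qi in zip(p, q)]
    return 0.5 * kl(p, m) + 0.5 * kl(q, m)

def select_utterance(recognized, candidates):
    """Pick the candidate topic whose word distribution is closest
    (in JS divergence) to the recognized speech."""
    return min(candidates, key=lambda c: js_divergence(recognized, candidates[c]))

# toy distributions over a 4-word vocabulary: (weather, food, health, family)
recognized = [0.6, 0.1, 0.2, 0.1]
candidates = {
    "talk_about_weather": [0.7, 0.1, 0.1, 0.1],
    "talk_about_food":    [0.1, 0.7, 0.1, 0.1],
}
print(select_utterance(recognized, candidates))   # -> talk_about_weather
```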
{"title":"Growing neural gas based conversation selection model for robot partner and human communication system","authors":"Shogo Yoshida, N. Kubota","doi":"10.1109/RIISS.2014.7009166","DOIUrl":"https://doi.org/10.1109/RIISS.2014.7009166","url":null,"abstract":"Elderly people with socially isolated has become an important problem in Japan. Therefore, the introduction robot partner for supporting socially isolated elderly people's life become of the solutions. This paper discusses conversation selection model using Growing Neural Gas(GNG). The robot partner is composed of a smart device used as a face module and the robot body module with two arms. First we discuss the necessity of robot partner in conjunction with elderly people life support, while we also discuss the connection between conversation selection model and robot partner's communication ability performance. Next, we propose conversation selection model using GNG for determining robot partner's utterance from voice recognition result. We conduct experiments to discuss the effectiveness of the proposed method based on GNG and JS divergence. Finally, we show the robot partner's capability in selecting words while performing conversation using the proposed method.","PeriodicalId":270157,"journal":{"name":"2014 IEEE Symposium on Robotic Intelligence in Informationally Structured Space (RiiSS)","volume":"68 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127113295","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 2
Multi-robots coverage approach
R. Chellali, K. Baizid
In this paper we present a full and effective system allowing the deployment of N semi-autonomous robots to cover a given area for video surveillance and search purposes. The coverage problem is solved through a new technique based on the exploitation of Voronoi tessellations. To supervise a given area, a set of viewpoints is extracted and then visited by a group of mobile rovers. Robot paths are calculated by solving a traveling salesman problem with multi-objective genetic algorithms. In the running phase, the robots deal with both motion and sensor uncertainties while following the pre-established paths. Results for an indoor scenario are given.
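Below is a hedged sketch of the two pipeline stages: extracting candidate viewpoints from a Voronoi tessellation (via scipy.spatial.Voronoi) and ordering them into a tour. A greedy nearest-neighbour tour stands in for the paper's multi-objective genetic algorithm, and the site positions and area bounds are invented.

```python
import numpy as np
from scipy.spatial import Voronoi

def voronoi_viewpoints(sites, xmin=0, xmax=10, ymin=0, ymax=10):
    """Use finite Voronoi vertices inside the area as candidate viewpoints."""
    v = Voronoi(sites).vertices
    inside = ((v[:, 0] >= xmin) & (v[:, 0] <= xmax) &
              (v[:, 1] >= ymin) & (v[:, 1] <= ymax))
    return v[inside]

def greedy_tour(points, start=0):
    """Nearest-neighbour tour: a simple stand-in for the GA-based TSP solver."""
    todo = list(range(len(points)))
    tour = [todo.pop(start)]
    while todo:
        last = points[tour[-1]]
        nxt = min(todo, key=lambda i: np.linalg.norm(points[i] - last))
        todo.remove(nxt)
        tour.append(nxt)
    return tour

rng = np.random.default_rng(0)
sites = rng.uniform(0, 10, (15, 2))       # e.g. obstacle or landmark positions
viewpoints = voronoi_viewpoints(sites)
print(greedy_tour(viewpoints))            # visiting order over the viewpoints
```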
{"title":"Multi-robots coverage approach","authors":"R. Chellali, K. Baizid","doi":"10.1109/RIISS.2014.7009171","DOIUrl":"https://doi.org/10.1109/RIISS.2014.7009171","url":null,"abstract":"In this paper we present a full and effective system allowing the deployment of N semi-autonomous robots in order to cover a given area for video surveillance and search purposes. The coverage problem is solved through a new technique based on the exploitation of Voronoi tessellations. To supervise a given area, a set of viewpoints are extracted, then visited by a group of mobile rover. Robots paths are calculated by resorting a salesman problem through Multi-objective Genetic Algorithms. In the running phase, robots deal with both motion and sensors uncertainties while performing the pre-established paths. Results of indoor scenario are given.","PeriodicalId":270157,"journal":{"name":"2014 IEEE Symposium on Robotic Intelligence in Informationally Structured Space (RiiSS)","volume":"3 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128289227","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0