2010 IEEE Workshop on Advanced Robotics and its Social Impacts — Latest Publications

Robot's behavior expressions according to the sentence types and emotions with modification by personality
Pub Date : 2010-10-26 DOI: 10.1109/ARSO.2010.5680043
Jong-Chan Park, Hyunsoo Song, S. Koo, Young-Min Kim, D. Kwon
Expression has become one of the important parts of human-robot interaction, serving as an intuitive communication channel between humans and robots. However, it is very difficult to construct a robot's behaviors one by one, so developers must consider how to create the robot's various motions easily. We therefore propose a useful behavior expression method based on sentence types and emotions. In this paper, robots express behaviors using multi-modal motion sets described as combinations of sentence types and emotions. To gather the data for the multi-modal motion sets, we used video analysis of an actress for the human modalities and conducted user tests for the non-human modalities. We developed a behavior edit-toolkit to make and modify robot behaviors easily. We also proposed stereotyped actions according to the robot's personality to diversify the behavior expressions. The 25 behaviors defined from the sentence types and emotions have been applied to Silbot, a test-bed robot at the CIR in Korea, and used for English education.
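The core idea in the abstract — choose a motion set by (sentence type, emotion) and then modify it by personality — can be sketched as a small lookup table. All names and motion values below are invented for illustration; they are not taken from the actual Silbot toolkit or the paper's 25 defined behaviors.

```python
# Hypothetical sketch: a motion set is selected by (sentence_type, emotion),
# then extended with a personality-specific stereotyped action.
# Every key and motion name here is an illustrative placeholder.

MOTION_SETS = {
    ("question", "joy"): ["tilt_head", "raise_arm"],
    ("statement", "neutral"): ["nod"],
    ("exclamation", "surprise"): ["step_back", "open_arms"],
}

PERSONALITY_ACTIONS = {
    "extrovert": ["wide_gesture"],
    "introvert": ["small_nod"],
}

def select_behavior(sentence_type, emotion, personality):
    """Return the multi-modal motion set for a sentence/emotion pair,
    extended by the robot personality's stereotyped action."""
    base = MOTION_SETS.get((sentence_type, emotion), ["idle"])
    return base + PERSONALITY_ACTIONS.get(personality, [])

print(select_behavior("question", "joy", "extrovert"))
# ['tilt_head', 'raise_arm', 'wide_gesture']
```

The table-driven structure mirrors why an edit-toolkit helps: adding a new behavior is a data change, not a code change.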
Citations: 5
A ubiquitous Smart Parenting and Customized Education service robot
Pub Date : 2010-10-01 DOI: 10.1109/ARSO.2010.5679634
Ho-Joon Lee, Jong C. Park
In this paper, we introduce the u-SPACE service robot, designed to help children who may be left alone while their caregivers are away from home. To protect children from indoor dangers, the service robot provides customized guiding messages that take into account a child's location information and behavioral patterns, after detecting dangerous objects and situations. These guiding messages are vocalized by our emotional speech generation system, which is also used to read fairy tales to a child as part of a home education service. The outward appearance of the u-SPACE service robot is modeled on a teddy bear in order to provide a safe and comforting environment for children. Two touch sensors for basic child-robot interaction are installed on each hand of the robot, and an RFID tag is placed inside the body. A PDA with a Wi-Fi communication module, a touch screen, and a speaker serves as the main operating device of the robot.
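The customization step the abstract describes — a guiding message chosen from a detected danger plus the child's context — can be sketched as a rule table with a safe fallback. The dangers, locations, and message texts below are invented for illustration; they are not the u-SPACE system's actual rules.

```python
# Illustrative rule-based sketch: map (detected danger, child location) to a
# customized guiding message, with a generic fallback when no rule matches.
# All rules and message strings are hypothetical placeholders.

DANGER_MESSAGES = {
    ("scissors", "living_room"): "Please put the scissors down and move away.",
    ("stove", "kitchen"): "The stove is hot. Please leave the kitchen.",
}

def guiding_message(danger, location):
    """Return a customized guiding message, or a generic warning."""
    return DANGER_MESSAGES.get(
        (danger, location),
        "Please stay away from dangerous objects.",
    )

print(guiding_message("stove", "kitchen"))
# The stove is hot. Please leave the kitchen.
```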
Citations: 3
uRON v1.5: A device-independent and reconfigurable robot navigation library
Pub Date : 2010-10-01 DOI: 10.1109/ARSO.2010.5679696
Sunglok Choi, Jae-Y. Lee, Wonpil Yu
Many laboratories and companies are developing mobile robots with various sensors and actuators, and they usually implement navigation techniques tailored to their own robots. In this paper, we introduce a novel robot navigation library, Universal Robot Navigation (uRON). uRON is designed to be portable and independent of robot hardware and operating systems, so users can apply it to their robots with a small amount of code. Moreover, uRON provides reusable navigation components and a reconfigurable navigation framework. It contains navigation components such as localization, path planning, path following, and obstacle avoidance, and users can create their own components from the existing ones. uRON also includes a navigation framework that assembles the components and wraps them as high-level functions, so users can realize their robot services easily and quickly. We applied uRON to three service robots in Tomorrow City, Incheon, South Korea. The three robots had different hardware and performed different services; uRON made all three robots mobile and satisfied the complex service requirements with fewer than 500 lines of code.
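The component/framework split the abstract describes — navigation components behind small interfaces, plus a framework that assembles them into high-level functions — can be sketched as follows. The interfaces and class names are hypothetical, not the actual uRON API (which is a C++ library).

```python
# Minimal sketch of a reconfigurable navigation framework: components
# implement a small interface, and the framework wires them into a
# high-level call. Names are illustrative, not uRON's real API.

from abc import ABC, abstractmethod

class PathPlanner(ABC):
    @abstractmethod
    def plan(self, start, goal):
        """Return a list of waypoints from start to goal."""

class StraightLinePlanner(PathPlanner):
    def plan(self, start, goal):
        # Placeholder planner: a direct "path" from start to goal.
        return [start, goal]

class NavigationFramework:
    """Assembles components and exposes a high-level function."""
    def __init__(self, planner: PathPlanner):
        self.planner = planner  # swap in any PathPlanner implementation

    def go_to(self, start, goal):
        return self.planner.plan(start, goal)

nav = NavigationFramework(StraightLinePlanner())
print(nav.go_to((0, 0), (5, 3)))
# [(0, 0), (5, 3)]
```

Because the framework depends only on the interface, porting to a new robot means swapping component implementations, which is how a library like this can keep per-robot code small.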
Citations: 8
Scene space inference based on stereo vision
Pub Date : 2010-10-01 DOI: 10.1109/ARSO.2010.5680017
K. Lin, Han-Pang Huang, Sheng-Yen Lo, Chun-Hung Huang
This paper provides an intuitive way to infer the space of a scene using stereo cameras. We first segmented the ground out of the image by adaptively learning a ground model, then used the convex hull to approximate the scene space. Objects within the scene can also be detected with the stereo cameras. Finally, we organized the scene space and the objects within it into a graphical model and used particle filters to approximate the solution. Experiments were conducted to test the accuracy of the ground segmentation and the precision and recall of object detection within the scene. The precision and recall of object detection were both about 50% in our system; with additional tracking of the object, the recall improved by approximately 5%. The result can serve as prior knowledge for further image tasks, e.g. obstacle avoidance or object recognition.
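As a reminder of what the reported ~50% detection figures mean, precision and recall follow directly from counts of true positives (TP), false positives (FP), and false negatives (FN). The counts below are made up to show the arithmetic; they are not the paper's data.

```python
# Standard precision/recall definitions:
#   precision = TP / (TP + FP)   -- of the detections, how many were right
#   recall    = TP / (TP + FN)   -- of the real objects, how many were found

def precision_recall(tp, fp, fn):
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return precision, recall

# Illustrative counts, not the paper's experimental data.
p, r = precision_recall(tp=50, fp=50, fn=50)
print(p, r)
# 0.5 0.5
```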
Citations: 0
Hierarchical database based on feature parameters for various multimodal expression generation of robot
Pub Date : 2010-10-01 DOI: 10.1109/ARSO.2010.5679627
W. Kim, J. Park, Won Hyong Lee, M. Chung
In this paper, we propose a reliable, diverse, extensible, and usable expression generation system. The proposed system automatically generates synchronized multimodal expressions based on a hierarchical database and context information, such as the robot's emotional state and the sentence the robot is trying to say. Compared to prior systems, our system, based on feature parameters, makes it much easier to generate new expressions and to modify expressions according to the robot's emotion. The system consists of a sentence module, an emotion module, and an expression module; here we focus only on the robot's expression module. To generate expressions automatically, we use the outputs of the sentence and emotion modules. We classified robot sentences into 13 types and robot emotions into 3 types. For all 39 resulting categories plus body language, we constructed a behavior database with 128 expressions. For reliability and variety of expression, a professional actor's expression data were obtained, and we asked a cartoonist to sketch the robot's expressions for the defined categories.
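The category count in the abstract (13 sentence types × 3 emotions = 39) suggests a two-level lookup, which is one simple way to read "hierarchical database". The type and emotion names below are invented placeholders, not the paper's actual taxonomy or feature parameters.

```python
# Sketch of a hierarchical lookup: sentence type -> emotion -> expression
# parameters. The names are illustrative; only the 13 x 3 = 39 category
# count comes from the abstract.

sentence_types = [f"sentence_type_{i}" for i in range(13)]
emotions = ["positive", "neutral", "negative"]

database = {
    s: {e: {"params": (s, e)} for e in emotions}
    for s in sentence_types
}

n_categories = sum(len(per_type) for per_type in database.values())
print(n_categories)
# 39
```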
Citations: 3
Control performance of a motion controller for robot-assisted surgery
Pub Date : 2010-10-01 DOI: 10.1109/ARSO.2010.5679619
Sungchoon Lee, Jeong-Geun Lim, Kyunghwan Kim
Total Knee/Hip Replacement (TKR/THR) is one of the most important orthopedic surgical techniques of this century. If a patient's whole joint is damaged, an artificial joint (total hip/knee replacement surgery) can relieve the patient's pain and help the patient return to normal activities. The goal of TKR/THR is to relieve the pain in the joint caused by damage to the cartilage, with the surgeon replacing the damaged parts of the joint. For example, in an arthritic knee the damaged ends of the bones and cartilage are replaced with metal and plastic surfaces shaped to restore knee movement and function. In an arthritic hip, the damaged ball (the upper end of the femur) is replaced by a metal ball attached to a metal stem fitted into the femur, and a plastic socket is implanted into the pelvis, replacing the damaged socket. Using the "new" joint shortly after the operation is strongly encouraged; after a TKR/THR, patients will often stand and begin walking the day after surgery.
Citations: 2
Trials of cybernetic human HRP-4C toward humanoid business
Pub Date : 2010-10-01 DOI: 10.1109/ARSO.2010.5679688
K. Miura, Shin'ichiro Nakaoka, Shuji Kajita, K. Kaneko, F. Kanehiro, M. Morisawa, K. Yokoi
We have developed a humanoid robot (a cybernetic human called "HRP-4C") which has the appearance and shape of a human being, can walk and move like one, and interacts with humans using speech recognition. Standing 158 cm tall and weighing 43 kg (including the battery), with joints and dimensions set to average values for young Japanese females, HRP-4C looks very human-like. In this paper, we present our ongoing efforts to create a new business in the content industry with HRP-4C.
Citations: 12
Robust robot's attention for human based on the multi-modal sensor and robot behavior
Pub Date : 2010-10-01 DOI: 10.1109/ARSO.2010.5680037
Sangseok Yun, C. G. Kim, Munsang Kim, Mun-Taek Choi
In this paper, we propose robust robot attention to humans based on multi-modal sensors and robot behavior. All of the robot components for attention run on an intelligent-robot software architecture. The human search component collects human information from vision and voice sensing under varying illumination and in dynamic environments, and the human tracker follows the face trajectory efficiently and safely. Contrary to common belief, the biggest obstacle and competitive factor in robotics is expected to be human-robot interaction. Since robot intelligence is not yet at a practical level, a general interaction manager built on an intelligence system will not be realized for some time; instead, our system focuses on one-way speaking and emotion-based expression toward humans. Experimental results show that the proposed scheme works successfully in real environments.
Citations: 4
Design of mine detection robot for Korean mine field
Pub Date : 2010-10-01 DOI: 10.1109/ARSO.2010.5679622
S. Kang, Junho Choi, SeungBeum Suh, Sungchul Kang
This paper presents the critical design constraints of mine detection robots for Korean minefields. As part of a demining robot development project, the environment of the Korean minefields was investigated and the requirements for a suitable robot design were determined. Most of the landmines in Korean minefields were buried close to the demilitarized zone (DMZ) more than half a century ago. These areas have not been urbanized at all since the Korean War, and the locations where explosives were likely placed by military tactics are now covered by vegetation. Therefore, at the initial stage of the demining robot system development, the target areas were investigated and a design suitable for Korean minefield terrain was determined. The design includes a tracked main platform with a simple moving arm and a mine detection sensor (at this stage, a metal detector and a GPR). In addition, to maintain an effective distance between the landmine sensors and the ground surface, a distance sensing technique for terrain adaptability was developed and is briefly introduced in this paper. The overall design of the robot was determined by considering the speed of the whole mine detection process and the economics of replacing humans in the minefield. The details of the conceptual design and the mine detection scenario are presented in this paper.
Citations: 6
Eye motion generation in a mobile service robot 'SILBOT II'
Pub Date : 2010-10-01 DOI: 10.1109/ARSO.2010.5679626
Kyung-Geune Oh, Chan-Yul Jung, Mun-Taek Choi, Seung-Jong Kim
The face of a robot capable of facial expression has a complex internal structure consisting of many actuators, sensors, and other parts. In particular, the eyes and their neighboring elements, such as the upper/lower eyelids and cameras, are very densely arranged. In this paper, a compact eyeball module is proposed that is driven by metal wires enclosed in Teflon tubes, so that one directional motion is produced by a motor pushing or pulling one wire. Cylindrical and ball-type pivot parts are used at the ends of the tubes and wires to permit rotation during eye movement. The performance of the proposed module is verified by comparing experimental and analytical results; the results match well, and the positioning precision is good.
Citations: 2