
2015 International Conference on Advanced Robotics (ICAR): Latest Publications

On the EMG-based torque estimation for humans coupled with a force-controlled elbow exoskeleton
Pub Date : 2015-09-10 DOI: 10.1109/ICAR.2015.7251472
J. B. Ullauri, L. Peternel, B. Ugurlu, Yoji Yamada, J. Morimoto
Exoskeletons are successful at supporting human motion only when the necessary amount of power is provided at the right time. Exoskeleton control based on EMG signals can be utilized to command the required amount of support in real time. To this end, one needs to map human muscle activity to the desired task-specific exoskeleton torques. To achieve such a mapping, this paper analyzes two distinct methods for estimating human elbow-joint torque from the related muscle activity. The first model is adopted from pneumatic artificial muscles (PAMs). The second model is based on a machine learning method known as Gaussian Process Regression (GPR). The performance of both approaches was assessed by their ability to estimate the elbow-joint torque of two able-bodied subjects using EMG signals collected from the biceps and triceps muscles. The experiments suggest that the GPR-based approach provides more favorable predictions.
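The GPR mapping from muscle activity to joint torque can be sketched from scratch in a few lines. The following is a minimal illustration only: the kernel, noise level, and the synthetic biceps/triceps-to-torque relation are hypothetical stand-ins, not the models or data used in the paper.

```python
import numpy as np

def rbf_kernel(A, B, length_scale=1.0, variance=1.0):
    """Squared-exponential kernel between the row vectors of A and B."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return variance * np.exp(-0.5 * d2 / length_scale**2)

def gpr_fit_predict(X_train, y_train, X_test, noise=1e-2):
    """Standard GP regression posterior mean and variance at X_test."""
    K = rbf_kernel(X_train, X_train) + noise * np.eye(len(X_train))
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y_train))
    Ks = rbf_kernel(X_train, X_test)
    mean = Ks.T @ alpha
    v = np.linalg.solve(L, Ks)
    var = np.diag(rbf_kernel(X_test, X_test)) - (v ** 2).sum(0)
    return mean, var

# Hypothetical stand-in data: [biceps, triceps] EMG envelopes -> elbow torque
rng = np.random.default_rng(0)
emg = rng.uniform(0, 1, size=(40, 2))
torque = 8.0 * emg[:, 0] - 5.0 * emg[:, 1]   # invented ground-truth relation
mean, var = gpr_fit_predict(emg, torque, emg[:5])
```

In practice the EMG envelopes would be rectified and low-pass filtered before being fed to the regressor.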
Citations: 7
Transferring object grasping knowledge and skill across different robotic platforms
Pub Date : 2015-07-27 DOI: 10.1109/ICAR.2015.7251502
A. Paikan, David Schiebener, Mirko Wächter, T. Asfour, G. Metta, L. Natale
This study describes the transfer of object-grasping skills between two humanoid robots with different software frameworks. We realize such a knowledge and skill transfer between the humanoid robots iCub and ARMAR-III. The two robots have different kinematics and are programmed using different middlewares, YARP and ArmarX. We developed a bridge system that allows grasping skills of ARMAR-III to be executed on iCub. As the embodiment differs, grasps known to be feasible for one robot are not always feasible for the other. We propose a reactive correction behavior that detects failure of a grasp during its execution and corrects it until it succeeds, thereby adapting the known grasp definition to the new embodiment.
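The reactive correction behavior can be pictured as a detect-correct-retry loop around grasp execution. The sketch below is a schematic stand-in, not the authors' controller; the failure model and the fixed 1 cm correction step are invented for illustration.

```python
def grasp_with_correction(try_grasp, propose_correction, max_attempts=10):
    """Execute a grasp; on detected failure, apply a correction and retry.

    Returns the adapted grasp (now feasible on the new embodiment) and the
    number of attempts it took.
    """
    grasp = {"offset_cm": 0.0}   # hypothetical transferred grasp parameter
    for attempt in range(1, max_attempts + 1):
        if try_grasp(grasp):
            return grasp, attempt
        grasp = propose_correction(grasp)
    raise RuntimeError("no feasible grasp found within attempt budget")

# Invented failure model: the transferred grasp is offset by 3 cm on the
# new robot and succeeds only once the correction brings it close enough.
def try_grasp(g):
    return abs(g["offset_cm"] - 3.0) < 0.5

def shift(g):
    return {"offset_cm": g["offset_cm"] + 1.0}   # 1 cm per correction step

grasp, attempts = grasp_with_correction(try_grasp, shift)
```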
Citations: 8
Cooperative control of manipulator robotic systems with unknown dynamics
Pub Date : 2015-07-27 DOI: 10.1109/ICAR.2015.7251487
E. Mehrabi, H. Talebi, M. Zarei-nejad, I. Sharifi
Cooperative control of manipulator robotic systems for grasping and handling an object is studied in this paper. Based on the passive decomposition approach, the cooperative system is decomposed into decoupled shaped and locked systems. Then, regressor-free adaptive control laws for the decoupled shaped and locked systems are proposed. Unlike existing shaped-and-locked approaches in the literature, the proposed approach guarantees the passivity of the closed-loop system when the robots' dynamics are unknown. Simulation results verify the accuracy of the proposed control scheme.
Citations: 3
Dynamic process migration in heterogeneous ROS-based environments
Pub Date : 2015-07-27 DOI: 10.1109/ICAR.2015.7251505
José Cano, Eduardo J. Molinos, V. Nagarajan, S. Vijayakumar
In distributed (mobile) robotics environments, the different computing substrates offer flexible resource-allocation options for performing the computations that implement an overall system goal. The AnyScale concept that we introduce and describe in this paper exploits this redundancy by dynamically allocating tasks to appropriate substrates (or scales) to optimize system performance, migrating tasks as the current resource and performance parameters change. We demonstrate this concept with a general ROS-based infrastructure that solves the task-allocation problem by optimising system performance while reacting correctly to unpredictable events. Assignment decisions are based on a characterisation of the static/dynamic parameters that represent the system and its interaction with the environment. We instantiate our infrastructure in a case-study application in which a mobile robot navigates the floor of a building to reach a predefined goal. Experimental validation demonstrates more robust performance (around a one-third improvement in metrics) under the AnyScale implementation framework.
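The kind of cost-driven assignment such an infrastructure performs can be illustrated with a greedy allocator. The substrates, speeds, latencies, and cost function below are all hypothetical; the paper's actual optimisation and migration mechanisms are more involved.

```python
def allocate(tasks, substrates, cost):
    """Assign each task to the substrate with the lowest current cost,
    updating substrate load so later assignments react to resource state."""
    load = {s: 0.0 for s in substrates}
    assignment = {}
    for task, demand in tasks.items():
        best = min(substrates, key=lambda s: cost(s, demand, load[s]))
        assignment[task] = best
        load[best] += demand
    return assignment

# Hypothetical parameters: the on-board CPU is slow but has zero network
# latency; the server is fast but every offloaded task pays a fixed delay.
SPEED = {"robot": 1.0, "server": 4.0}
LATENCY = {"robot": 0.0, "server": 2.0}

def cost(s, demand, current_load):
    return (current_load + demand) / SPEED[s] + LATENCY[s]

tasks = {"slam": 8.0, "planner": 2.0, "blink_led": 0.1}
assignment = allocate(tasks, ["robot", "server"], cost)
```

With these numbers only the heavy SLAM task pays off when offloaded; light tasks stay on board.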
Citations: 10
Robot-mediated mixed gesture imitation skill training for young children with ASD
Pub Date : 2015-07-27 DOI: 10.1109/ICAR.2015.7251436
Zhi Zheng, Eric M. Young, A. Swanson, A. Weitlauf, Z. Warren, N. Sarkar
Autism Spectrum Disorder (ASD) affects 1 in 68 children in the U.S., with tremendous consequent costs in care and treatment. Evidence suggests that early intervention is critical for optimal treatment results. Robots have been shown to have great potential to attract the attention of children with ASD and can facilitate early interventions targeting core deficits. In this paper, we propose a robotic platform that mediates imitation-skill training for young children with ASD. Imitation skills are considered among the most important skill deficits in children with ASD, with a profound impact on social communication. While a few previous works have provided methods for single-gesture imitation training, the current paper extends the training to incorporate mixed gestures consisting of multiple single gestures during intervention. A preliminary user study showed that the proposed robotic system was able to stimulate mixed-gesture imitation in young children with ASD with promising gesture-recognition accuracy.
Citations: 7
GMM-based detection of human hand actions for robot spatial attention
Pub Date : 2015-07-27 DOI: 10.1109/ICAR.2015.7251500
Riccardo Monica, J. Aleotti, S. Caselli
In this paper, a spatial-attention approach is presented for a robot manipulator equipped with a Kinect range sensor in an eye-in-hand configuration. The locations of salient object-manipulation actions performed by the user are detected by analyzing the motion of the user's hand. The relevance of user activities is determined by an attentional approach based on Gaussian mixture models. A next-best-view planner focuses the viewpoint of the eye-in-hand sensor on the regions of the workspace that are most salient. The 3D scene representation is updated using a modified version of the KinectFusion algorithm that exploits the robot kinematics. Experiments comparing two variations of next-best-view strategies are reported.
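The attentional scoring can be sketched as evaluating candidate workspace regions under a Gaussian mixture over observed hand positions, then pointing the next best view at the highest-scoring region. The mixture parameters and candidate points below are invented for illustration; in the paper the GMM would be fitted to real hand-motion data.

```python
import numpy as np

def gmm_logpdf(x, weights, means, covs):
    """Log-density of point x under a Gaussian mixture with diagonal covariances."""
    logps = []
    for w, m, c in zip(weights, means, covs):
        quad = ((x - m) ** 2 / c).sum()
        norm = 0.5 * len(m) * np.log(2 * np.pi) + 0.5 * np.log(c).sum()
        logps.append(np.log(w) - 0.5 * quad - norm)
    return np.logaddexp.reduce(logps)

# Hypothetical mixture over hand positions (x, y, z in metres) observed
# during manipulation:
weights = [0.7, 0.3]
means = [np.array([0.4, 0.1, 0.8]), np.array([-0.2, 0.3, 0.6])]
covs = [np.array([0.01, 0.01, 0.02]), np.array([0.02, 0.02, 0.02])]

def salience(p):
    return gmm_logpdf(np.asarray(p), weights, means, covs)

# Direct the sensor toward the candidate region with the highest salience.
candidates = [(0.4, 0.1, 0.8), (1.5, 1.5, 0.2)]
target = max(candidates, key=salience)
```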
Citations: 0
Reactive, task-specific object manipulation by metric reinforcement learning
Pub Date : 2015-07-27 DOI: 10.1109/ICAR.2015.7251511
Simon Hangl, Emre Ugur, S. Szedmák, J. Piater, A. Ude
In the context of manipulating dynamical systems, it is not trivial to design controllers that can cope with unpredictable changes in the system being manipulated. For example, in a pouring task, the target cup might start moving, or the agent may decide to change the amount of liquid during action execution. To cope with these situations, the robot should smoothly (and timely) change its execution policy based on the requirements of the new situation. In this paper, we propose a robust method that allows the robot to react smoothly and successfully to such changes. The robot first learns a set of execution trajectories that solve a number of tasks in different situations. When encountering a novel situation, the robot smoothly adapts its trajectory to a new one generated by a weighted linear combination of the previously learned trajectories, where the weights are computed using a task-dependent metric. This metric is learned automatically in the state space of the robot, rather than in the motor-control space, and is further optimized using a reinforcement learning (RL) framework. We discuss how our system can learn and model various manipulation tasks such as pouring or reaching, and can successfully react to a wide range of perturbations introduced during task execution. We evaluated our method against ground truth with a synthetic trajectory dataset, and verified it in grasping and pouring tasks with a real robot.
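The core adaptation step, blending learned trajectories with weights derived from a task metric, can be sketched as follows. A fixed RBF kernel on the task context stands in for the learned, RL-optimized metric, and the two 1-D "reach the cup" demonstrations are synthetic.

```python
import numpy as np

def combine_trajectories(trajs, contexts, query, bandwidth=1.0):
    """Blend stored trajectories with weights from a kernel on task context.

    trajs:    (K, T, D) array of K demonstrations, T steps, D dimensions
    contexts: (K, C) task descriptors attached to each demonstration
    query:    (C,) descriptor of the new situation
    """
    d2 = ((contexts - query) ** 2).sum(axis=1)
    w = np.exp(-0.5 * d2 / bandwidth**2)   # stand-in for the learned metric
    w /= w.sum()
    return np.tensordot(w, trajs, axes=1)  # (T, D) blended trajectory

# Two demonstrations: reach a cup at x=0 and a cup at x=1 (1-D, 5 steps).
t = np.linspace(0, 1, 5)[:, None]
trajs = np.stack([0.0 * t, 1.0 * t])
contexts = np.array([[0.0], [1.0]])

# New situation: the cup has moved to x=0.5, giving equal weights and a
# trajectory exactly midway between the two demonstrations.
blended = combine_trajectories(trajs, contexts, np.array([0.5]))
```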
Citations: 12
Visual matching of stroke order in robotic calligraphy
Pub Date : 2015-07-27 DOI: 10.1109/ICAR.2015.7251496
Hsien-I Lin, Yu-Che Huang
Robotic calligraphy is an interesting problem that has recently drawn much attention. Two major problems in robotic calligraphy are stroke shape and stroke order. Most previous work focused on controlling brush trajectory, pressure, velocity, and acceleration to draw a desired stroke shape. Stroke order, by contrast, was given manually from a database. Even optical character recognition (OCR) software cannot recognize stroke order from a character image. This paper describes the automatic extraction of the stroke order of a Chinese character by visual matching. Specifically, the stroke order of a Chinese character in an image can be generated automatically by associating it with a standard image of the same character whose stroke order is given. The proposed visual-matching method extracts Hough-line features from an input image and uses a support vector machine (SVM) to associate them with the features of the standard image. The features used in the proposed method were evaluated on several Chinese characters. Two famous Chinese characters, “Country” and “Dragon”, were used to demonstrate the feasibility of the proposed method. The matched rates of the stroke order of “Country” and “Dragon” were 95.8% and 90.3%, respectively.
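The line-feature-plus-SVM association can be approximated with orientation-histogram features over line segments and a from-scratch linear SVM. This is a Pegasos-style stand-in for the library SVM and Hough-transform image pipeline the paper implies; the segments and labels below are synthetic.

```python
import numpy as np

def angle_histogram(segments, bins=6):
    """Feature vector: histogram of line-segment orientations in [0, pi)."""
    angles = [np.arctan2(y2 - y1, x2 - x1) % np.pi
              for x1, y1, x2, y2 in segments]
    hist, _ = np.histogram(angles, bins=bins, range=(0, np.pi))
    return hist / max(hist.sum(), 1)

def train_linear_svm(X, y, lam=0.01, epochs=300, seed=0):
    """Minimal Pegasos-style linear SVM (sub-gradient descent on hinge loss)."""
    rng = np.random.default_rng(seed)
    w, b, t = np.zeros(X.shape[1]), 0.0, 0
    for _ in range(epochs):
        for i in rng.permutation(len(X)):
            t += 1
            eta = 1.0 / (lam * t)
            if y[i] * (X[i] @ w + b) < 1:
                w = (1 - eta * lam) * w + eta * y[i] * X[i]
                b += eta * y[i]
            else:
                w *= (1 - eta * lam)
    return w, b

# Synthetic strokes: horizontal-ish segments (+1) vs vertical-ish (-1),
# each stroke given as (x1, y1, x2, y2) line segments.
horiz = [[(0, 0, 10, 1)], [(0, 2, 9, 2)], [(1, 1, 8, 0)]]
vert = [[(0, 0, 1, 10)], [(2, 0, 2, 9)], [(1, 1, 0, 8)]]
X = np.array([angle_histogram(s) for s in horiz + vert])
y = np.array([1, 1, 1, -1, -1, -1])
w, b = train_linear_svm(X, y)
pred = np.sign(X @ w + b)
```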
Citations: 9
A high frequency 3D LiDAR with enhanced measurement density via Papoulis-Gerchberg
Pub Date : 2015-07-27 DOI: 10.1109/ICAR.2015.7251509
Bengisu Ozbay, Elvan Kuzucu, M. Gul, Dilan Ozturk, M. Tasci, A. Arisoy, H. Sirin, Ismail Uyanik
Light Detection and Ranging (LiDAR) devices are gaining importance for obtaining sensory information in mobile-robot applications. However, existing solutions in the literature yield low-frequency outputs with large measurement delays when obtaining a 3D range image of the environment. This paper introduces the design and construction of a 3D range sensor based on rotating a 2D LiDAR around its pitch axis. Unlike previous approaches, we set the scan frequency to 5 Hz to support applications on mobile-robot platforms. However, increasing the scan frequency drastically reduces the measurement density of the 3D range images. We therefore propose two post-processing algorithms that increase measurement density while keeping the 3D scan frequency at an acceptable level. To this end, we use an extended version of the Papoulis-Gerchberg algorithm to achieve super-resolution on 3D range data by estimating the unmeasured samples in the environment. In addition, we propose a probabilistic obstacle-reconstruction algorithm that considers the probabilities of the estimated (virtual) points and yields a very fast prediction of the existence and shape of obstacles.
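The classical Papoulis-Gerchberg iteration that the paper extends alternates between enforcing a band limit in the frequency domain and re-imposing the measured samples in the signal domain. A minimal 1-D version, with a synthetic band-limited "range profile" standing in for LiDAR data:

```python
import numpy as np

def papoulis_gerchberg(samples, known, bandwidth, iters=500):
    """Fill in unmeasured samples of a band-limited signal.

    samples:   signal with zeros at unknown positions
    known:     boolean mask of measured positions
    bandwidth: number of retained low-frequency FFT bins on each side
    """
    x = samples.copy()
    n = len(x)
    for _ in range(iters):
        X = np.fft.fft(x)
        X[bandwidth + 1: n - bandwidth] = 0   # project onto the band limit
        x = np.fft.ifft(X).real
        x[known] = samples[known]             # re-impose measured values
    return x

# Synthetic band-limited profile with ~30% of the samples missing.
n = 64
t = np.arange(n)
truth = np.sin(2 * np.pi * 3 * t / n) + 0.5 * np.cos(2 * np.pi * 5 * t / n)
rng = np.random.default_rng(1)
known = rng.random(n) > 0.3
sparse = np.where(known, truth, 0.0)
recovered = papoulis_gerchberg(sparse, known, bandwidth=8)
```

The iteration converges because both constraint sets (band-limited signals, and signals agreeing with the measurements) are linear subspaces containing the true signal.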
Citations: 4
Subjective difficulty and indicators of performance of joystick-based robot arm teleoperation with auditory feedback
Pub Date : 2015-07-27 DOI: 10.1109/ICAR.2015.7251439
N. Mavridis, G. Pierris, P. Gallina, N. Moustakas, A. Astaras
Joystick-based teleoperation is a dominant method for remotely controlling various types of robots, such as excavators, cranes, and space telerobots. Our ultimate goal is to create effective methods for training and assessing human operators of joystick-controlled robots. Toward that goal, an extensive study with a total of 38 experimental subjects was performed on both a simulated and a physical robot, using either no feedback or auditory feedback. In this paper, we present the complete experimental setup and report only on the 18 experimental subjects teleoperating the simulated robot. Multiple observables were recorded, including not only joystick and robot angles and timings, but also subjective measures of difficulty, personality and usability data, and automated analysis of the subjects' facial expressions and blink rate. Our initial results indicate, first, that the subjective difficulty of teleoperation with auditory feedback has smaller variance than teleoperation without feedback, and second, that the subjective difficulty of a task is linearly related to the logarithm of task-completion time. Third, we introduce two important indicators of operator performance, the Average Velocity of Robot Joints (AVRJ) and the Correct-to-Wrong-Joystick Direction Ratio (CWJR), and show how these relate to accumulated user experience and to task time. We conclude with a forward-looking discussion including future steps.
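The two proposed indicators admit straightforward formalizations from logged trial data. The sketch below is one plausible reading of AVRJ and CWJR; the paper's exact definitions may differ, and the logged angles and command directions are invented.

```python
import numpy as np

def avrj(joint_angles, dt):
    """Average Velocity of Robot Joints: mean absolute joint speed (rad/s)
    over a trial, from joint angles sampled every dt seconds."""
    vel = np.abs(np.diff(joint_angles, axis=0)) / dt
    return vel.mean()

def cwjr(joystick_dirs, correct_dirs):
    """Correct-to-Wrong Joystick Direction Ratio over logged commands."""
    correct = int((joystick_dirs == correct_dirs).sum())
    wrong = len(joystick_dirs) - correct
    return correct / max(wrong, 1)

# Hypothetical log: 3 joints sampled at 10 Hz, and 8 joystick commands
# compared against the task-appropriate directions.
angles = np.array([[0.0, 0.0, 0.0],
                   [0.1, 0.0, 0.2],
                   [0.2, 0.1, 0.2]])
dirs  = np.array([+1, +1, -1, +1, -1, +1, +1, +1])
ideal = np.array([+1, +1, +1, +1, -1, +1, +1, -1])
avrj_val = avrj(angles, dt=0.1)
cwjr_val = cwjr(dirs, ideal)
```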
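The two performance indicators named in the abstract, AVRJ and CWJR, can be sketched in a few lines. The listing does not give the authors' exact formulas, so the implementations below are plausible assumptions (mean absolute joint velocity over the task, and count of correctly-directed joystick samples over incorrectly-directed ones), not the paper's definitions.

```python
import numpy as np

def avrj(joint_angles, timestamps):
    """Average Velocity of Robot Joints (assumed definition):
    mean absolute angular velocity across all joints and samples."""
    dtheta = np.abs(np.diff(joint_angles, axis=0))   # per-step joint displacement
    dt = np.diff(timestamps)[:, None]                # per-step duration
    return float(np.mean(dtheta / dt))

def cwjr(joystick_dirs, correct_dirs):
    """Correct-to-Wrong-Joystick-Direction Ratio (assumed definition):
    samples where the commanded direction matches the task-correct
    direction, divided by the number of mismatched samples."""
    correct = int(np.sum(joystick_dirs == correct_dirs))
    wrong = len(joystick_dirs) - correct
    return correct / max(wrong, 1)                   # avoid division by zero
```

Both take time-aligned logs of the teleoperation session: `joint_angles` as an (N samples × M joints) array with matching `timestamps`, and direction signals as arrays of per-sample direction labels.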
Citations: 11