
2018 Second IEEE International Conference on Robotic Computing (IRC): Latest Publications

Learning Object Classifiers with Limited Human Supervision on a Physical Robot
DOI: 10.1109/IRC.2018.00060
Christopher Eriksen, A. Nicolai, W. Smart
In recent years, deep learning approaches have been leveraged to achieve impressive results in object recognition. However, such techniques are problematic in real-world robotics applications because of the burden of collecting and labeling training images. We present a framework by which we can direct a robot to acquire domain-relevant data with little human effort. This framework is situated in a lifelong learning paradigm by which the robot can be more intelligent about how it collects and stores data over time. By iteratively training only on image views that increase classifier performance, our approach is able to collect representative views of objects with fewer data requirements for long-term storage of datasets. We show that our approach for acquiring domain-relevant data leads to a significant improvement in classification performance on in-domain objects compared to using available pre-constructed datasets. Additionally, our iterative view sampling method is able to find a good balance between classifier performance and data storage constraints.
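The iterative view-sampling idea (retain a newly collected view only when it improves classifier performance on held-out data) can be pictured as a greedy loop. The sketch below is a hypothetical simplification: it uses a nearest-centroid classifier in place of the paper's deep model, and all function names are invented here.

```python
import numpy as np

def centroid_accuracy(X, y, X_val, y_val):
    """Accuracy of a nearest-centroid classifier trained on (X, y)."""
    X, y = np.asarray(X, float), np.asarray(y)
    cents = {c: X[y == c].mean(axis=0) for c in np.unique(y)}
    classes = sorted(cents)
    pred = [min(classes, key=lambda c: np.linalg.norm(v - cents[c])) for v in X_val]
    return float(np.mean(np.array(pred) == np.array(y_val)))

def select_views(candidates, labels, X_val, y_val, seed_X, seed_y):
    """Greedy view selection: keep a candidate view only if retraining
    with it improves held-out accuracy, so storage stays small."""
    kept_X, kept_y = list(seed_X), list(seed_y)
    best = centroid_accuracy(kept_X, kept_y, X_val, y_val)
    for x, c in zip(candidates, labels):
        acc = centroid_accuracy(kept_X + [x], kept_y + [c], X_val, y_val)
        if acc > best:  # retain the view only if it helps the classifier
            kept_X, kept_y, best = kept_X + [x], kept_y + [c], acc
    return kept_X, kept_y, best
```

The same accept-if-it-helps loop applies unchanged if the centroid classifier is swapped for a fine-tuned CNN.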
Citations: 4
Towards Safe Speed and Separation Monitoring in Human-Robot Collaboration with 3D-Time-of-Flight Cameras
DOI: 10.1109/IRC.2018.00042
Urban B. Himmelsbach, T. Wendt, Matthias Lai
Human-robot collaboration plays a strong role in industrial production processes. The ISO/TS 15066 defines four different methods of collaboration between humans and robots. So far, no robotic system has been available that incorporates all four collaboration methods at once. In particular, for speed and separation monitoring, no sensor system has been available that can easily be attached directly to an off-the-shelf industrial robot arm and that is capable of detecting obstacles at distances from a few millimeters up to five meters. This paper presents first results of using a 3D time-of-flight camera directly on an industrial robot arm for obstacle detection in human-robot collaboration. We attached a Visionary-T camera from SICK to the flange of a KUKA LBR iiwa 7 R800. With Matlab, we evaluated the pictures and found that the system works very well for detecting obstacles at distances from 0.5 m up to 5 m.
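The reported working range (0.5 m to 5 m) suggests a simple depth-image filter as the core of the monitoring step. The sketch below is illustrative, not the authors' code; the `separation` threshold is an assumed value standing in for the protective separation distance of ISO/TS 15066.

```python
import numpy as np

# Assumed numbers taken from the abstract: usable detection range 0.5-5 m.
NEAR, FAR = 0.5, 5.0

def nearest_obstacle(depth_m):
    """Distance (m) of the closest obstacle in a depth image, ignoring
    pixels outside the camera's usable range; None if nothing is seen."""
    d = np.asarray(depth_m, dtype=float)
    valid = d[(d >= NEAR) & (d <= FAR)]
    return float(valid.min()) if valid.size else None

def should_stop(depth_m, separation=1.0):
    """Speed-and-separation monitoring step: stop the robot if anything
    is closer than the protective separation distance (illustrative)."""
    nearest = nearest_obstacle(depth_m)
    return nearest is not None and nearest < separation
```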
Citations: 12
MoVEMo: A Structured Approach for Engineering Reward Functions
DOI: 10.1109/IRC.2018.00053
Piergiuseppe Mallozzi, Raúl Pardo, Vincent Duplessis, Patrizio Pelliccione, G. Schneider
Reinforcement learning (RL) is a machine learning technique that has been increasingly used in robotic systems. In reinforcement learning, instead of manually pre-programming what action to take at each step, we convey the goal to a software agent in terms of reward functions. The agent tries different actions in order to maximize a numerical value, i.e., the reward. A misspecified reward function can cause problems such as reward hacking, where the agent finds ways to maximize the reward without achieving the intended goal. As RL agents become more general and autonomous, the design of reward functions that elicit the desired behaviour in the agent becomes more important and more cumbersome. In this paper, we present a technique to formally express reward functions in a structured way; this encourages proper reward function design and also enables its formal verification. We start by defining the reward function using state machines. In this way, we can statically check that the reward function satisfies certain properties, e.g., high-level requirements of the function to learn. We then automatically generate a runtime monitor, which runs in parallel with the learning agent, that provides the rewards according to the definition of the state machine and the behaviour of the agent. We use the UPPAAL model checker to design the reward model and verify the TCTL properties that model high-level requirements of the reward function, and LARVA to monitor and enforce the reward model on the RL agent at runtime.
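A reward function defined as a state machine, as the paper proposes, can be monitored at runtime by advancing the machine on every observation. The toy two-state machine below is entirely invented for illustration (states, reward values, and the `at_goal` observation key are assumptions); it shows how such a monitor discourages one form of reward hacking by paying the goal reward only once.

```python
class RewardMachine:
    """Minimal state-machine reward monitor (hypothetical example):
    reward reaching the goal once, not loitering in the goal region."""

    def __init__(self):
        self.state = "searching"

    def step(self, observation):
        """Return the reward for this observation and advance the machine."""
        at_goal = observation.get("at_goal", False)
        if self.state == "searching" and at_goal:
            self.state = "done"
            return 10.0   # one-time reward for first reaching the goal
        if self.state == "done" and at_goal:
            return 0.0    # no further reward for staying at the goal
        return -0.1       # small step cost otherwise
```

In the paper's pipeline, the machine itself would first be model-checked (UPPAAL/TCTL) before a monitor like this is generated.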
Citations: 6
Reachset Conformance Testing of Human Arms with a Biomechanical Model
DOI: 10.1109/IRC.2018.00045
C. Stark, Aaron Pereira, M. Althoff
Guaranteeing safety in human-robot co-existence often requires a prediction of the volume that could be occupied by the human up to a future time, in order to avoid collisions. Such predictions should be simple and fast for real-time calculation and collision-checking, but account even for unexpected movement. We use a complex biomechanical model to search for extreme human movement, to validate such a prediction. Since the model has a large input space and highly nonlinear dynamics, we use an exploration algorithm based on RRTs to efficiently find the extreme movements. We find that the simple prediction encloses all arm positions found by the exploration algorithm, except where the biomechanical model does not account for collision between body tissue.
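The conformance test itself reduces to checking that every arm position found by the exploration algorithm lies inside the predicted occupancy volume. As a hedged sketch, using a simple ball around the shoulder as the predicted set (the paper's actual prediction is more refined than this):

```python
import numpy as np

def conforms(shoulder, samples, reach, margin=0.0):
    """Simplified reachset conformance check: every sampled hand
    position must lie inside a ball of radius reach + margin around
    the shoulder. Returns (ok, worst_violation_m); a positive
    worst_violation_m is the largest distance by which a sample
    escapes the predicted volume."""
    d = np.linalg.norm(np.asarray(samples, float) - np.asarray(shoulder, float), axis=1)
    excess = d - (reach + margin)
    return bool(np.all(excess <= 0.0)), float(excess.max())
```

Running this over the extreme motions found by the RRT-based exploration either validates the prediction or reports the worst violation.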
Citations: 4
Implementation of Feeding Task via Learning from Demonstration
DOI: 10.1109/IRC.2018.00058
N. Ettehadi, A. Behal
In this paper, a Learning From Demonstration (LFD) approach is used to design an autonomous meal-assistant agent. The feeding task is modeled as a mixture of Gaussian distributions. Using data collected via kinesthetic teaching, the parameters of a Gaussian Mixture Model (GMM) are learned using Gaussian Mixture Regression (GMR) and the Expectation Maximization (EM) algorithm. Reproduction of feeding trajectories for different environments is obtained by solving a constrained optimization problem. In this method, we show that obstacles can be avoided by the robot's end-effector by adding a set of extra constraints to the optimization problem. Finally, the performance of the designed meal assistant is evaluated in two feeding scenario experiments: one with obstacles in the path between the bowl and the mouth, and one without.
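Gaussian Mixture Regression reproduces a trajectory by conditioning the learned joint GMM on time: E[x | t] is a responsibility-weighted blend of per-component conditional means. A minimal 1-D sketch of that conditioning step (the paper's trajectories are higher-dimensional and additionally pass through a constrained optimization):

```python
import numpy as np

def gmr(t, priors, means, covs):
    """Gaussian Mixture Regression conditional mean E[x | t] for a GMM
    over (t, x) pairs. priors: length-K weights; means: (K, 2) with
    rows (mu_t, mu_x); covs: (K, 2, 2) joint covariances."""
    means = np.asarray(means, float)
    covs = np.asarray(covs, float)
    # Responsibility of each component for input t (Gaussian in t;
    # the shared normalizing constant cancels when h is normalized).
    h = np.array([p * np.exp(-0.5 * (t - m[0]) ** 2 / c[0, 0]) / np.sqrt(c[0, 0])
                  for p, m, c in zip(priors, means, covs)])
    h /= h.sum()
    # Per-component conditional means, blended by responsibility.
    cond = [m[1] + c[1, 0] / c[0, 0] * (t - m[0]) for m, c in zip(means, covs)]
    return float(np.dot(h, cond))
```

Evaluating `gmr` over a time grid yields the smooth reproduction that the constrained optimizer then adapts to each environment.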
Citations: 5
Human Object Identification for Human-Robot Interaction by Using Fast R-CNN
DOI: 10.1109/IRC.2018.00043
Shih-Chung Hsu, Yu-Wen Wang, Chung-Lin Huang
This paper proposes a human object identification method using a simplified fast region-based convolutional network (R-CNN). Human identification is a problem of considerable practical interest. Here, we propose a state-of-the-art method which is tested on major pedestrian datasets. Human detection consists of body part detectors for the head and shoulders, torso, and pair of legs, with three, two, and four different appearances, respectively. These detectors are integrated to identify the human object in different poses. Fast R-CNN is a well-known method for object recognition using deep CNNs. The hybrid body part detector demonstrates its merits for partially occluded human detection by integrating the scores of the individual part detectors based on an occlusion map. The highest merging score gives the best configuration for evaluating the detection score of the human detector. Experiments on two public datasets (INRIA and Caltech) show the effectiveness of the proposed approach.
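One plausible reading of the occlusion-map score merging is a visibility-weighted average of the part-detector scores. The sketch below is an assumption about the merging rule, not the paper's exact formula; part names and weights are illustrative.

```python
def merge_score(part_scores, occlusion):
    """Combine body-part detector scores into one person score,
    down-weighting occluded parts (hypothetical merging rule).
    part_scores / occlusion: dicts keyed by part name; occlusion
    values lie in [0, 1], where 1 means fully occluded."""
    vis = {p: 1.0 - occlusion.get(p, 0.0) for p in part_scores}
    total_w = sum(vis.values())
    if total_w == 0.0:
        return 0.0  # nothing visible: no evidence of a person
    return sum(part_scores[p] * vis[p] for p in part_scores) / total_w
```

Evaluating this over candidate part configurations and keeping the highest score mirrors the "highest merging score" selection described above.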
Citations: 25
Estimating the Operation of Unknown Appliances for Service Robots Using CNN and Ontology
DOI: 10.1109/IRC.2018.00039
G. A. G. Ricardez, Yosuke Osaki, Ming Ding, J. Takamatsu, T. Ogasawara
We can expect robots to efficiently perform tasks using appliances in a similar way to humans. A common approach is to build models of appliances so that a robot can operate them, but this process is time-consuming. In this paper, we propose a method to estimate the proper operation of appliances using an ontology and convolutional neural networks (CNNs). We propose to use CNNs to detect the appliances and their operating parts, and then perform an ontology analysis of the operating parts (e.g., buttons) and the appliances to infer the proper operation. This method can be used for appliances it was not trained on, because the dataset generalizes well due to the inclusion of multiple appliances and the separate training for appliances and operating parts. We experimentally verify the effectiveness of the proposed method with a service robot operating in multi-object environments.
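The ontology analysis step can be pictured as a lookup that generalizes from known appliance/part pairs to unseen appliances sharing the same operating part, which is what lets the method handle appliances it was not trained on. The ontology entries and names below are invented examples, not the paper's knowledge base.

```python
# Toy ontology: (appliance, operating part) -> operation it affords.
ONTOLOGY = {
    ("microwave", "push_button"): "press to start",
    ("microwave", "dial"): "rotate to set time",
    ("kettle", "lever"): "push down to boil",
}

def infer_operation(appliance, part, default="unknown"):
    """Infer the proper operation; for an unseen appliance, fall back
    to any known appliance that has the same operating part."""
    if (appliance, part) in ONTOLOGY:
        return ONTOLOGY[(appliance, part)]
    for (_, known_part), op in ONTOLOGY.items():
        if known_part == part:
            return op  # generalize across appliances via the part class
    return default
```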
Citations: 6
An Integrated System for Gait Analysis Using FSRs and an IMU
DOI: 10.1109/IRC.2018.00073
Harin Kim, Yeon Kang, David R. Valencia, Donghan Kim
In this paper, we developed a system to analyze gait patterns by integrating insole-type FSR sensors and IMU sensors. Using this system, an experiment was conducted to analyze the walking pattern of a pedestrian, and the reliability of the developed system was verified. The developed system extracts six measurements (roll, pitch, yaw, foot height, foot movement distance, and weight on the FSR sensors) from the sensors. These data can be used to calculate stride lengths and step lengths, which are important when analyzing pedestrian walking patterns. Experiments to verify the developed gait system determine its reliability based on the calculated data with a unit stride of 0.5 m. As a result, it was confirmed that the step length had an error range of ±7.17% and the stride length had an error range of ±6.71%.
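Stride length is conventionally the distance between successive heel strikes of the same foot, which can be computed directly from the extracted foot-movement data. A minimal sketch with hypothetical heel-strike events (the event format is invented here):

```python
def stride_lengths(heel_strikes):
    """Stride length per foot: distance between successive heel strikes
    of the same foot. heel_strikes: list of (foot, x_position) events
    in chronological order."""
    last = {}
    strides = []
    for foot, x in heel_strikes:
        if foot in last:
            strides.append(abs(x - last[foot]))
        last[foot] = x
    return strides

def percent_error(measured, true):
    """Relative error in percent, as used to report the ±% ranges."""
    return 100.0 * abs(measured - true) / true
```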
Citations: 9
A Comparison of Constant Curvature Forward Kinematics for Multisection Continuum Manipulators
DOI: 10.1109/IRC.2018.00046
Anant Chawla, Chase G. Frazelle, I. Walker
Over the past few years, modeling of continuum robots has been the subject of considerable attention in the research community. In this paper, we compare a set of forward kinematic models developed for continuum robots, with the underlying assumption of piecewise constant curvature. A new approximate kinematic model based on phase and actuator length differences is also introduced for comparison. The comparative evaluation consists of computer simulation and physical experiments on a multisection continuum robotic manipulator, the OctArm. The experiments include both elongation and bending in 3D space. The comparative accuracy of the models is reported, along with relative numerical stability. Further conclusions are drawn on the applicability of the models to different real-world scenarios.
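Under the piecewise constant curvature assumption, each section is a circular arc parameterized by curvature kappa, bending-plane angle phi, and arc length ell; the standard tip-position geometry is sketched below (positions only, omitting orientation and section chaining):

```python
import numpy as np

def cc_section_tip(kappa, phi, ell):
    """Tip position of one constant-curvature section with curvature
    kappa (1/m), bending-plane angle phi (rad), and arc length ell (m).
    As kappa -> 0 the arc degenerates to a straight segment along z."""
    if abs(kappa) < 1e-9:
        return np.array([0.0, 0.0, ell])
    r = 1.0 / kappa
    x = r * (1.0 - np.cos(kappa * ell))  # lateral offset in the bending plane
    z = r * np.sin(kappa * ell)          # height along the base z-axis
    # Rotate the bending plane about z by phi.
    return np.array([x * np.cos(phi), x * np.sin(phi), z])
```

Chaining full homogeneous transforms of such sections gives the multisection forward kinematics the compared models build on.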
Citations: 24
Simulation Study of Autonomous Drive for Active Capsule Endoscopy
DOI: 10.1109/IRC.2018.00083
Hyeon Cho, Tae Jin Kim, Jae Hong Lee, H. Kim, Jong-Oh Park, Jong Hee Lee, Cheong Lee, Y. Son
Recently, external electromagnetic actuation (EMA) systems were introduced to control the locomotion of a capsule endoscope (CE) using magnetic force. An EMA system provides a manual user interface to control the system, but inspectors suffer from fatigue due to the long examination time. We propose an autonomous driving algorithm for the capsule endoscope. The algorithm searches, based on image processing, for the target point toward which the capsule should orient, and the steering is automatically manipulated until the capsule is oriented toward the target point. Then, propulsion is applied until the capsule deviates from the target point. To verify the feasibility of the algorithm, simulated endoscopic images were acquired from a commercially available endoscopic capsule using an intestine phantom and a linear and rotational motion stage. The driving simulator was tested on arc-shaped paths with various curvatures under various propulsion forces. In most conditions, the proposed algorithm succeeded in driving autonomously along the given paths. In some conditions, with a large curvature and a large propulsion force, the target point was missed, but a scanning algorithm for the missed target point may overcome this problem. In conclusion, the proposed algorithm could be utilized in an active capsule endoscope system and provide an autonomous driving mode in capsule endoscopy without additional sensors or devices.
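The steer-then-propel loop described above can be sketched as a single control step operating on the image-space offset of the target point. The gain, tolerance, and command format below are illustrative assumptions, not the paper's controller.

```python
def drive_step(target_offset_px, align_tol=10, turn_gain=0.01):
    """One control step for the capsule: steer until the image-space
    target point is centered, then propel forward. target_offset_px is
    the horizontal pixel offset of the target from the image center;
    returns a command dict (units are illustrative)."""
    if abs(target_offset_px) > align_tol:
        # Not yet oriented toward the target: turn, no propulsion.
        return {"turn": -turn_gain * target_offset_px, "forward": 0.0}
    # Oriented within tolerance: propel until the target drifts away.
    return {"turn": 0.0, "forward": 1.0}
```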
Citations: 1
Journal
2018 Second IEEE International Conference on Robotic Computing (IRC)