Pub Date : 2020-06-01 DOI: 10.1109/UR49135.2020.9144879
Fast In-situ Mesh Generation using Orb-SLAM2 and OpenMVS
Thomas Wright, Toshihide Hanari, K. Kawabata, B. Lennox
In exploratory robotics for nuclear decommissioning, environmental understanding is key. Sites such as Fukushima Daiichi Power Station and Sellafield often use manually controlled or semi-autonomous vehicles for exploration and monitoring of assets. In many cases, robots have limited sensing capabilities, such as a single camera providing video to the operators. These limitations can cause issues, where a lack of data about the environment and a limited understanding of depth within the image can lead to a misunderstanding of asset state or potential damage to the robot or environment. This work aims to aid operators by using the limited sensors provided, i.e. a single monocular camera, to generate estimates of the robot’s surrounding environment in situ without having to offload large amounts of data for processing. This information can then be displayed as a mesh and manipulated in 3D to improve operator awareness. Because the target environment is radioactive and radiation damages electronics, speed is prioritised over accuracy. In well-lit environments, images can be overlaid onto the meshes to add detail and improve the operator’s understanding. The results show that 3D meshes of an environment or object can be generated in an acceptable time frame of less than 5 minutes. This differs from many current methods, which either require offline processing due to the heavy computational requirements of photogrammetry or are far less informative, providing data as raw point clouds that can be hard to interpret. Owing to its speed of generation, the proposed technique allows lower-resolution meshes, good enough for avoiding collisions within an environment, to be generated during a mission; however, several issues still need to be solved before such a technique is ready for deployment.
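The pipeline named in the title couples ORB-SLAM2 (keyframe poses and a sparse map from the monocular video) with OpenMVS (densification and meshing). Below is a minimal sketch of the meshing back-end only, assuming the SLAM output has already been converted into an OpenMVS scene file; the file name scene.mvs, the conversion step, and the default output names of the OpenMVS tools are assumptions for illustration, not the authors' exact tooling.

import subprocess

def build_mesh(scene="scene.mvs", texture=True):
    """Run the standard OpenMVS command-line tools on an exported SLAM scene."""
    # Densify the sparse SLAM map into a dense point cloud (writes scene_dense.mvs by default).
    subprocess.run(["DensifyPointCloud", scene], check=True)
    # Triangulate the dense cloud into a surface mesh (writes scene_dense_mesh.mvs by default).
    subprocess.run(["ReconstructMesh", "scene_dense.mvs"], check=True)
    if texture:
        # In well-lit scenes, project the camera images onto the mesh for extra detail.
        subprocess.run(["TextureMesh", "scene_dense_mesh.mvs"], check=True)

if __name__ == "__main__":
    build_mesh()

Skipping the texturing step keeps the runtime down, in line with the emphasis above on speed over accuracy.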
{"title":"Fast In-situ Mesh Generation using Orb-SLAM2 and OpenMVS","authors":"Thomas Wright, Toshihide Hanari, K. Kawabata, B. Lennox","doi":"10.1109/UR49135.2020.9144879","DOIUrl":"https://doi.org/10.1109/UR49135.2020.9144879","url":null,"abstract":"In exploratory robotics for nuclear decommissioning, environmental understanding is key. Sites such as Fukushima Daiichi Power Station and Sellafield often use manually controlled or semi-autonomous vehicles for exploration and monitoring of assets. In many cases, robots have limited sensing capabilities such as a single camera to provide video to the operators. These limitations can cause issues, where a lack of data about the environment and limited understanding of depth within the image can lead to a mis-understanding of asset state or potential damage being caused to the robot or environment. This work aims to aid operators by using the limited sensors provided i.e. a single monocular camera, to allow estimates of the robot’s surrounding environments to be generated insitu without having to off load large amounts of data for processing. This information can then be displayed as a mesh and manipulated in 3D to improve the operator awareness. Due to the target environment for operation being radioactive, speed is prioritised over accuracy, due to the damaging effects radiation can cause to electronics. In well lit environments images can be overlaid onto the meshes to improve the operators understanding and add detail to the mesh. From the results it has been found that 3D meshes of an environment/object can be generated in an acceptable time frame, less than 5 minutes. This differs from many current methods which require offline processing due to heavy computational requirement of Photogrammetry, or are far less informative giving data as raw point clouds, which can be hard to interpret. The proposed technique allows for lower resolution meshes good enough for avoiding collisions within an environment to be generated during a mission due to it’s speed of generation, however there are still several issues which need to be solved before such a technique is ready for deployment.","PeriodicalId":360208,"journal":{"name":"2020 17th International Conference on Ubiquitous Robots (UR)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129777455","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date : 2020-06-01 DOI: 10.1109/UR49135.2020.9144917
Persistent Area Coverage for Swarms Utilizing Deployment Entropy with Potential Fields
John D. Kelly, D. Lofaro, D. Sofge
Our work focuses on persistent area coverage using a large number of agents. This is a valuable capability for multi-agent and swarm-based systems. Specifically, we strive to disperse the agents throughout an area of interest so that it is sufficiently and persistently covered by the agents’ sensing sweeps. This capability can be applied to tasks such as surveillance, target tracking, search and rescue, and exploration of unknown areas. Many methods can be implemented as agent behaviors to accomplish this. One strategy measures area coverage using deployment entropy, which relies on the area being divided into regions: deployment entropy expresses coverage as the uniformity of the number of agents per region across all regions. This strategy is useful due to its low computational complexity, scalability, and potential implementation on decentralized systems. Though previous results are promising, they focus on instantaneous area coverage and are not persistent. This paper proposes that combining the split-region strategy with potential fields retains its benefits while increasing the spread of agents and therefore the total area persistently covered by the agents’ sensors. The approach is implemented and demonstrated to be effective through simulations with various numbers and densities of agents. Ultimately, these studies show that a greater spread of agents and increased sensor coverage are obtained compared with previous results that use deployment entropy without potential fields.
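As a concrete illustration of the coverage measure described above, the following sketch computes a deployment-entropy value over a rectangular area split into a grid of regions. The exact definition used by the authors is in their cited prior work; here it is taken as the Shannon entropy of the per-region agent fractions, which is maximised when the agents are spread uniformly, and the area size and grid resolution are illustrative.

import numpy as np

def deployment_entropy(positions, area=(100.0, 100.0), grid=(4, 4)):
    """positions: (N, 2) array of agent x, y coordinates inside `area`."""
    positions = np.asarray(positions, dtype=float)
    counts, _, _ = np.histogram2d(
        positions[:, 0], positions[:, 1],
        bins=grid, range=[(0, area[0]), (0, area[1])])
    p = counts.ravel() / max(len(positions), 1)
    p = p[p > 0]                              # empty regions contribute 0 (0 * log 0 := 0)
    return float(-(p * np.log(p)).sum())

# 16 agents clustered in one corner vs. spread one per region:
rng = np.random.default_rng(0)
clustered = rng.uniform(0.0, 25.0, size=(16, 2))
spread = np.array([[12.5 + 25 * i, 12.5 + 25 * j] for i in range(4) for j in range(4)])
print(deployment_entropy(clustered), deployment_entropy(spread))   # low vs. log(16) ~ 2.77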
{"title":"Persistent Area Coverage for Swarms Utilizing Deployment Entropy with Potential Fields","authors":"John D. Kelly, D. Lofaro, D. Sofge","doi":"10.1109/UR49135.2020.9144917","DOIUrl":"https://doi.org/10.1109/UR49135.2020.9144917","url":null,"abstract":"Our work focuses on persistent area coverage using a large number of agents. This is a valuable capability for multi-agent and swarm-based systems. Specifically, we strive to effectively disperse the agents throughout an area of interest such that it is sufficiently and persistently covered by the sensing sweeps of the agents. This capability can be applied toward tasks such as surveillance, target tracking, search and rescue, and exploration of unknown areas. Many methods can be implemented as behaviors for the agents to accomplish this. One strategy involves measuring area coverage using a measure known as deployment entropy, which relies on the area being divided into regions. Deployment entropy expresses the coverage of the area as the uniformity of agents per region across all regions. This strategy is useful due to its low computational complexity, scalability, and potential implementation on decentralized systems. Though previous results are promising, they focus on instantaneous area coverage and are not persistent. It is proposed in this paper that combining the split region strategy with the implementation of potential fields can retain the benefits of the split region strategy while increasing the spread of agents and therefore the total area that is persistently covered by the agents’ sensors. This approach is implemented and demonstrated to be effective through simulations of various numbers and densities of agents. Ultimately, these studies showed that a greater spread of agents and increased sensor coverage is obtained when compared to previous results not using potential fields with deployment entropy.","PeriodicalId":360208,"journal":{"name":"2020 17th International Conference on Ubiquitous Robots (UR)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130291645","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date : 2020-06-01 DOI: 10.1109/UR49135.2020.9144767
A Fusion of CNNs and ICP for 3-D Point Cloud Registration*
Wen-Chung Chang, Van-Toan Pham, Yang-Cheng Huang
3-D point cloud registration is one of the principal techniques for estimating object pose in 3-D space and is critical to object picking and assembly in automated manufacturing lines. Accordingly, this paper proposes an effective registration architecture that estimates the transformation between a data point cloud and a model point cloud. Specifically, in the first registration stage, a trainable convolutional neural network (CNN) model is developed to learn the pose estimation between two point clouds over the full orientation range from −180° to 180°. To generate the training data set, a descriptor is proposed that extracts features from the point clouds, and these features are used to train the CNN model. Then, based on the rough estimate from the trained CNN model in the first stage, the two point clouds are precisely aligned in the second stage using the Iterative Closest Point (ICP) algorithm. Finally, the performance of the proposed two-stage registration architecture is verified by experiments in comparison with a baseline method. The experimental results show that the developed algorithm guarantees high precision while significantly reducing estimation time.
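A minimal sketch of the second (refinement) stage only, assuming the first stage, the trained CNN, has already produced a coarse 4x4 transform T_init. Open3D's point-to-point ICP is used here as a readily available implementation; the module path shown is for recent Open3D releases and is not necessarily the authors' implementation.

import open3d as o3d

def refine_with_icp(source, target, T_init, max_corr_dist=0.02):
    """source, target: o3d.geometry.PointCloud; T_init: coarse 4x4 pose from the CNN stage."""
    result = o3d.pipelines.registration.registration_icp(
        source, target, max_corr_dist, T_init,
        o3d.pipelines.registration.TransformationEstimationPointToPoint())
    return result.transformation   # refined 4x4 transform of source w.r.t. target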
{"title":"A Fusion of CNNs and ICP for 3-D Point Cloud Registration*","authors":"Wen-Chung Chang, Van-Toan Pham, Yang-Cheng Huang","doi":"10.1109/UR49135.2020.9144767","DOIUrl":"https://doi.org/10.1109/UR49135.2020.9144767","url":null,"abstract":"3-D point cloud registration appears to be one of the principal techniques to estimate object pose in 3-D space and is critical to object picking and assembly in automated manufacturing lines. Thereby, this paper proposes an effective registration architecture with the aim of estimating the transformation between a data point cloud and the model point cloud. Specifically, in the first registration stage, a trainable Convolutional Neural Network (CNN) model is developed to learn the pose estimation between two point clouds in the case of a full range of orientation from −180° to 180°. In order to generate the training data set, a descriptor is proposed to extract features which are employed to train the CNN model from point clouds. Then, based on the rough estimation of the trained CNN model in the first stage, two point clouds can be further aligned precisely in the second stage by using the Iterative Closest Point (ICP) algorithm. Finally, the performance of the proposed two-stage registration architecture has been verified by experiments in comparison with a baseline method. The experimental results illustrate that the developed algorithm can guarantee high precision while significantly reducing the estimation time.","PeriodicalId":360208,"journal":{"name":"2020 17th International Conference on Ubiquitous Robots (UR)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130761566","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date : 2020-06-01 DOI: 10.1109/UR49135.2020.9144925
A Proactive Trajectory Planning Algorithm for Autonomous Mobile Robots in Dynamic Social Environments
L. A. Nguyen, T. Pham, T. Ngo, Xuan-Tung Truong
This paper proposes a proactive trajectory planning algorithm for autonomous mobile robots in dynamic social environments. The main idea of the proposed proactive timed elastic band (PTEB) system is to combine the advantages of the timed elastic band (TEB) technique and the hybrid reciprocal velocity obstacle (HRVO) model by incorporating the potential-collision cost generated by the HRVO model into the objective function of the TEB technique. The output of the proposed PTEB system is an optimal trajectory that enables mobile robots to navigate safely in dynamic social environments. We validate the effectiveness of the proposed model through a series of experiments in simulation environments. The simulation results show that our proposed motion model is capable of driving mobile robots to proactively avoid dynamic obstacles, providing safe navigation for the robots.
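To make the idea of folding a velocity-obstacle term into the trajectory objective concrete, the sketch below adds a simple predicted-collision penalty to a time-optimality cost. It is a simplified stand-in for the HRVO cost, not the authors' PTEB formulation; all weights, the look-ahead horizon, and the obstacle representation are illustrative.

import numpy as np

def collision_penalty(p, v, obstacles, horizon=2.0, w=10.0):
    """Penalty that grows when velocity v from position p leads toward a predicted collision.
    obstacles: list of (position, velocity, safety_radius) tuples."""
    cost = 0.0
    for p_o, v_o, r in obstacles:
        rel_p, rel_v = np.asarray(p_o, float) - p, np.asarray(v_o, float) - v
        t = np.clip(-(rel_p @ rel_v) / (rel_v @ rel_v + 1e-9), 0.0, horizon)  # time of closest approach
        d = np.linalg.norm(rel_p + rel_v * t)
        if d < r:
            cost += w * (r - d)
    return cost

def trajectory_cost(waypoints, dt, obstacles, w_time=1.0):
    """Time-optimality term (as in TEB) plus the collision penalty at each waypoint."""
    waypoints = np.asarray(waypoints, dtype=float)
    vels = np.diff(waypoints, axis=0) / dt
    time_cost = w_time * (len(waypoints) - 1) * dt
    return time_cost + sum(collision_penalty(p, v, obstacles)
                           for p, v in zip(waypoints[:-1], vels))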
{"title":"A Proactive Trajectory Planning Algorithm for Autonomous Mobile Robots in Dynamic Social Environments","authors":"L. A. Nguyen, T. Pham, T. Ngo, Xuan-Tung Truong","doi":"10.1109/UR49135.2020.9144925","DOIUrl":"https://doi.org/10.1109/UR49135.2020.9144925","url":null,"abstract":"This paper proposes a proactive trajectory planning algorithm for autonomous mobile robots in dynamic social environments. The main idea of the proposed proactive timed elastic band (PTEB) system is to combine the advantages of the timed elastic band (TEB) technique and the hybrid reciprocal velocity obstacle (HRVO) model by incorporating the potential collision generated by the HRVO model into the objective function of the TEB technique. The output of the proposed PTEB system is the optimal trajectory, which enables the mobile robots to navigate safely in the dynamic social environments. We validate the effectiveness of the proposed model through a series of experiments in simulation environments. The simulation results show that, our proposed motion model is capable of driving the mobile robots to proactively avoid dynamic obstacles, providing the safe navigation for the robots.","PeriodicalId":360208,"journal":{"name":"2020 17th International Conference on Ubiquitous Robots (UR)","volume":"26 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"134075089","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date : 2020-06-01 DOI: 10.1109/UR49135.2020.9144803
Task Planning with Mixed-Integer Programming for Multiple Cooking Task Using dual-arm Robot
June-sup Yi, M. Ahn, Hosik Chae, Hyunwoo Nam, Donghun Noh, D. Hong, H. Moon
This work proposes a task scheduling method in an optimization framework, with application to a dual-arm cooking robot in a controlled cooking environment. A mixed-integer programming (MIP) framework is used to find an optimal sequence of tasks for each arm. The optimization is fast and simple because a priori information about the tasks to be scheduled reveals dependency and kinematic constraints between them, which significantly reduces the problem size as infeasible solutions are removed before optimization. The feasibility of the optimization approach is validated in a series of simulations, and an in-depth scalability analysis is conducted by varying the number of tasks, the dishes to be completed, and the locations where the tasks can be done. Considering the unique configuration of the platform, an analysis is also performed on selecting the tasks requiring minimum time as opposed to tasks that give the most flexibility to the other arm. An example on a real set of tasks is presented to show the optimality of the solution.
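A tiny illustrative MIP in the same spirit, written with the PuLP modelling library: assign each task to one of two arms so that the more heavily loaded arm finishes as early as possible, with one made-up dependency tying two tasks to the same arm. Task names, durations, and that dependency are invented for illustration; the formulation in the paper additionally encodes task sequencing and kinematic feasibility, which are omitted here.

from pulp import LpBinary, LpMinimize, LpProblem, LpVariable, lpSum

durations = {"chop": 20, "stir": 35, "pour": 10, "plate": 15}   # seconds, illustrative
arms = ["left", "right"]

prob = LpProblem("dual_arm_assignment", LpMinimize)
x = {(t, a): LpVariable(f"x_{t}_{a}", cat=LpBinary) for t in durations for a in arms}
makespan = LpVariable("makespan", lowBound=0)

prob += makespan                                      # minimise the latest-finishing arm
for t in durations:                                   # every task is assigned to exactly one arm
    prob += lpSum(x[t, a] for a in arms) == 1
for a in arms:                                        # each arm's total workload bounds the makespan
    prob += lpSum(durations[t] * x[t, a] for t in durations) <= makespan
for a in arms:                                        # made-up dependency: "pour" handled by the same arm as "stir"
    prob += x["pour", a] == x["stir", a]

prob.solve()
print({t: a for (t, a) in x if x[t, a].value() == 1}, makespan.value())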
{"title":"Task Planning with Mixed-Integer Programming for Multiple Cooking Task Using dual-arm Robot","authors":"June-sup Yi, M. Ahn, Hosik Chae, Hyunwoo Nam, Donghun Noh, D. Hong, H. Moon","doi":"10.1109/UR49135.2020.9144803","DOIUrl":"https://doi.org/10.1109/UR49135.2020.9144803","url":null,"abstract":"This work proposes a task scheduling method in an optimization framework with applications on a dual-arm cooking robot in a controlled cooking environment. A mixed-integer programming (MIP) framework is used to find an optimal sequence of tasks to be done for each arm. The optimization is fast and simple as a priori information about the tasks to be scheduled reveal dependency and kinematic constraints between them which significantly reduces the problem size as infeasible solutions are removed pre-optimization. The optimization approach’s feasibility is validated on a series of simulations and an in-depth scalability analysis is conducted by changing the number of tasks to be done, the dishes to be completed, as well as the locations where the tasks can be done. Considering the unique configuration of the platform, analysis on selecting the minimum time required tasks as opposed tasks that will give the most flexibility to the other arm is also done. An example is presented on a real set of tasks to show the optimality of the solution.","PeriodicalId":360208,"journal":{"name":"2020 17th International Conference on Ubiquitous Robots (UR)","volume":"124 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116028421","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date : 2020-06-01 DOI: 10.1109/UR49135.2020.9144805
A Stiffness-controlled Robotic Palm based on a Granular Jamming Mechanism
Jeongwon Lee, W. Han, Eunchan Kim, Ingu Choi, Sungwook Yang
This paper presents a new type of robotic palm based on a granular jamming mechanism to improve grasping performance. The granular jamming principle is adopted to alter the shape and stiffness of the robotic palm by controlling the transition between the solid and fluid states of the granular material used. The robotic palm incorporates a specifically designed granular chamber optimized for large volume changes. A control system is also developed so that the proposed granular jamming mechanism can be operated electrically without any pneumatic components. In addition, the stiffness of the palm can be precisely regulated by feedback control of the negative pressure applied to the granular chamber. We evaluate the shape adaptability of the robotic palm for various objects. The robotic palm accommodated the various shapes of the test objects by conformably altering its shape during contact. Moreover, stiffness controllability is investigated for three different sizes of granular material; the stiffness increases up to 30-fold in the fully jammed state for the smallest grain size. Finally, we evaluate the grasping performance of the robotic palm with a commercially available robot hand. A 1.7-times-higher grasping force was attained with the conformably deformed and stiffened surface compared to the flat skin of a rigid palm. Therefore, the stiffness-controlled robotic palm can improve grasping performance through enhanced shape adaptability and stiffness controllability.
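The feedback regulation of the negative pressure could look like the sketch below: a simple PI loop that drives a vacuum pump until the chamber reaches a gauge-pressure set-point and vents air when the palm should soften. The hardware-interface functions (read_pressure_kpa, set_pump_pwm, open_vent) and the gains are hypothetical placeholders, not the authors' control system.

import time

def regulate_vacuum(target_kpa, read_pressure_kpa, set_pump_pwm, open_vent,
                    kp=5.0, ki=0.5, dt=0.01, steps=1000):
    """PI regulation of gauge pressure (negative values = vacuum, e.g. -60 kPa for a stiff palm)."""
    integral = 0.0
    for _ in range(steps):
        error = target_kpa - read_pressure_kpa()   # negative error => chamber not evacuated enough
        integral += error * dt
        u = kp * error + ki * integral
        if u < 0.0:
            open_vent(False)
            set_pump_pwm(min(-u, 100.0))           # pull more vacuum (jam the grains, stiffen)
        else:
            set_pump_pwm(0.0)
            open_vent(True)                        # bleed air back in (unjam, soften)
        time.sleep(dt)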
{"title":"A Stiffness-controlled Robotic Palm based on a Granular Jamming Mechanism","authors":"Jeongwon Lee, W. Han, Eunchan Kim, Ingu Choi, Sungwook Yang","doi":"10.1109/UR49135.2020.9144805","DOIUrl":"https://doi.org/10.1109/UR49135.2020.9144805","url":null,"abstract":"This paper presents a new type of a robotic palm based on a granular jamming mechanism to improve grasping performance. The granular jamming principle is adopted to alter the shape and stiffness of the robotic palm by controlling a transition between a solid-state and a fluid-state of a granular material used. The robotic palm incorporates a specifically designed granular chamber that is optimized for dealing with large volume change. The control system is also developed for the proposed granular jamming mechanism to be electrically operated without any pneumatic components. In addition, the stiffness of the palm can be precisely regulated by the feedback control of the negative pressure applied to the granular chamber. We evaluate the shape-adaptability of the robotic palm for various objects. As a result, the robotic palm could accommodate the various shapes of the testing objects by conformably altering its shape during contact. Moreover, the stiffness-controllability is also investigated for the three different sizes of granular materials. The stiffness increases up to 30 fold under fully jammed state for the small size of the grain. Finally, we evaluate the grasping performance of the robotic palm with a commercially available robot hand. 1.7 times higher grasping force was attained with the conformably deformed and stiffened surface, compared to the flat skin of the rigid palm. Therefore, the stiffness-controlled robotic palm can improve grasping performance with the enhanced shape-adaptability and stiffness-controllability.","PeriodicalId":360208,"journal":{"name":"2020 17th International Conference on Ubiquitous Robots (UR)","volume":"80 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125482192","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date : 2020-06-01 DOI: 10.1109/UR49135.2020.9144978
Pointing Direction Estimation for Attention Target Extraction Using Body-mounted Camera
Yusei Oozono, H. Yamazoe, Joo-Ho Lee
In this paper, we propose a pointing-direction estimation method using a body-mounted camera. Opportunities to capture large amounts of image data in daily life are increasing due to the spread of smartphones and wearable cameras. In order to efficiently look back over the captured images, we aim to extract attention targets from the image sequences, because the attention target is important for reminding people of their memories. Toward this purpose, we propose a method for estimating the pointing direction from wearable-camera images. The proposed method consists of two steps: arm skeleton estimation and pointing-direction estimation. We build three types of pointing-direction estimation models and compare their accuracy to evaluate which body parts are important for pointing-direction estimation. The experimental results show that the model based on the wrists and elbows achieved the best results.
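A minimal sketch of the best-performing variant reported above, where the pointing direction is taken as the ray from the elbow to the wrist. The arm-skeleton step is assumed to be handled by an off-the-shelf pose estimator and is not shown; the keypoint coordinates in the example are illustrative.

import numpy as np

def pointing_direction(elbow, wrist):
    """elbow, wrist: keypoints (2-D image or 3-D camera coordinates) from the arm skeleton."""
    v = np.asarray(wrist, dtype=float) - np.asarray(elbow, dtype=float)
    return v / (np.linalg.norm(v) + 1e-9)          # unit vector along the forearm

print(pointing_direction([0.30, 1.20, 0.40], [0.55, 1.25, 0.70]))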
{"title":"Pointing Direction Estimation for Attention Target Extraction Using Body-mounted Camera","authors":"Yusei Oozono, H. Yamazoe, Joo-Ho Lee","doi":"10.1109/UR49135.2020.9144978","DOIUrl":"https://doi.org/10.1109/UR49135.2020.9144978","url":null,"abstract":"In this paper, we propose a pointing-directionestimation method using a body-mounted camera. The opportunities to capture a large amount of image data in daily life are increasing due to the spread of smartphones and wearable cameras. In order to efficiently look back at the captured images, we aim to extract attention targets from the image sequences because the attention target is important for reminding people of their memories. Toward this purpose, in this paper, we propose a method for estimating the pointing direction from wearable camera images. The proposed method consists of two steps: arm skeleton estimation and pointing direction estimation. We model three types of pointing-directionestimation models and compare the estimations’ accuracy for evaluating which parts are important for pointing direction estimation. The experimental results show that the model based on the wrists and elbows had the best results.","PeriodicalId":360208,"journal":{"name":"2020 17th International Conference on Ubiquitous Robots (UR)","volume":"47 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127043946","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date : 2020-06-01 DOI: 10.1109/UR49135.2020.9144702
Facial Landmark Localization Robust on the Eyes with Position Regression Network
Chanwoong Kwak, Jaeyoon Jang, Hosub Yoon
Facial landmark localization is essential for robot-human interaction. In particular, the human eye is especially important because it reveals a person’s interests. However, traditional methods do not account for eye-state changes in the dataset, so their limitations are clear. This paper presents a data augmentation method for acquiring varied eye images and a method for creating a robust eye-landmark model with two-stage training. Experiments on the augmented 300W-LP dataset show that our method outperforms the previous method.
{"title":"Facial Landmark Localization Robust on the Eyes with Position Regression Network","authors":"Chanwoong Kwak, Jaeyoon Jang, Hosub Yoon","doi":"10.1109/UR49135.2020.9144702","DOIUrl":"https://doi.org/10.1109/UR49135.2020.9144702","url":null,"abstract":"Facial landmark localization is essential for robot-human interaction. In particular, the human eye is more important because it can grasp a person’s interests. However, the traditional method does not consider eye changes from the dataset, so the limitation is clear, this paper presents a data augmentation method for acquiring various eye images and a method for creating a robust eye landmark model with 2-stage training. Experiments on augmented 300W-LP datasets show that our method outperforms performance than the previous method.","PeriodicalId":360208,"journal":{"name":"2020 17th International Conference on Ubiquitous Robots (UR)","volume":"153 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124372444","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date : 2020-06-01 DOI: 10.1109/UR49135.2020.9144836
Fall detection based on CNN models implemented on a mobile robot
Carlos Menacho, Jhon Ordoñez
Fall accidents are serious events that need to be addressed. Elderly people in particular may suffer such accidents, which can lead to injuries or even death. Convolutional neural networks (CNNs) have achieved state-of-the-art results for fall detection, but at a high computational cost. In this work, we propose an efficient CNN architecture with a reduced number of parameters, applied to fall detection as a service on a mobile robot equipped with resource-constrained hardware (the Nvidia Jetson TX2 platform). Different pre-trained CNN models are also compared to measure their performance in real scenarios, alongside other functions such as people following and navigation. Fall detection is carried out by extracting temporal features via optical flow computed from two consecutive RGB images. Our results confirm that the proposed network is faster and more suitable for running on resource-constrained hardware. Our model achieves 88.55% accuracy with the proposed architecture and runs at 23.16 FPS on the GPU and 10.23 FPS on the CPU.
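The temporal-feature step described above can be sketched as follows: dense optical flow between two consecutive RGB frames, returned as a two-channel field for the fall/no-fall classifier. OpenCV's Farneback flow is used here as a readily available method; the exact flow algorithm, input resolution, and network in the paper may differ.

import cv2
import numpy as np

def flow_features(frame_prev, frame_next, size=(224, 224)):
    """frame_*: BGR images (H, W, 3); returns an (h, w, 2) float32 flow field."""
    g0 = cv2.cvtColor(cv2.resize(frame_prev, size), cv2.COLOR_BGR2GRAY)
    g1 = cv2.cvtColor(cv2.resize(frame_next, size), cv2.COLOR_BGR2GRAY)
    # Positional args: prev, next, flow, pyr_scale, levels, winsize, iterations, poly_n, poly_sigma, flags
    flow = cv2.calcOpticalFlowFarneback(g0, g1, None, 0.5, 3, 15, 3, 5, 1.2, 0)
    return flow.astype(np.float32)                 # stacked (dx, dy) channels for the CNN input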
{"title":"Fall detection based on CNN models implemented on a mobile robot","authors":"Carlos Menacho, Jhon Ordoñez","doi":"10.1109/UR49135.2020.9144836","DOIUrl":"https://doi.org/10.1109/UR49135.2020.9144836","url":null,"abstract":"Fall accidents are serious events that need to be addressed. Generally, elderly people could suffer these accidents that may lead injures or even death. The use of Convolutional Neural Networks (CNN) has achieved the state of the art for fall detection, but it requires a high computational cost. In this work, we propose an efficient CNN architecture with a reduced number of parameters, which is applied to fall detection in a service with a mobile robot, equipped with a resource-constrained hardware (Nvidia Jetson TX2 platform). Also, different pre-trained CNN models are compared to measure their performances in real scenarios, in addition with other functions like following people and navigation. Furthermore, fall detection is carried out by extraction of temporal features obtained with an Optical Flow extraction from two consecutive RGB images. The proposed network is confirmed by our results to be faster and more suitable for running on resource-constrained Hardware. Our model achieves 88.55% of accuracy using the proposed architecture and it works at 23.16 FPS on GPU and 10.23 FPS on CPU.","PeriodicalId":360208,"journal":{"name":"2020 17th International Conference on Ubiquitous Robots (UR)","volume":"13 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116078203","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date : 2020-06-01 DOI: 10.1109/UR49135.2020.9144922
A Pneumatic Soft Gripper with Configurable Workspace and Self-sensing
Qiwen Shao, Ningbin Zhang, Zequn Shen, Guoying Gu
In this paper, we present a novel pneumatic soft gripper with a configurable workspace and perception capability, able to grasp various objects and recognize their sizes. The soft gripper consists of three pneu-net soft fingers embedded with resistive strain sensors and connected by a stretchable palm. The pneu-net soft fingers are fabricated through a lost-wax casting process. Each strain sensor is designed with an ionic hydrogel-elastomer hybrid structure and embedded into a soft finger to sense its deformation. The stretchable palm is designed with an opening-closing parallel mechanism driven by a pneumatic fiber-reinforced soft actuator to modify the grasping workspace of the gripper. Characterization experiments are conducted to demonstrate the performance of the soft gripper. Based on the strain-sensor measurements, we propose two grasping strategies for the soft gripper: a traditional finger-bending identification (FBI) strategy without the active palm and a new palm-closing identification (PCI) strategy with the active palm. Experimental results with an industrial robot demonstrate that our soft gripper with the PCI strategy performs more robust picking and more accurate identification than with the FBI strategy.
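As a toy illustration of the identification side of the FBI/PCI strategies, the sketch below maps the three finger strain readings recorded after a grasp to the nearest entry of a calibration table of known object diameters. The calibration values and the nearest-neighbour rule are invented for illustration and do not reflect the authors' sensor characteristics.

import numpy as np

calibration = {                     # object diameter (mm) -> typical strain readings of the three fingers
    40.0: np.array([0.12, 0.11, 0.13]),
    60.0: np.array([0.08, 0.09, 0.08]),
    80.0: np.array([0.05, 0.04, 0.05]),
}

def estimate_diameter(strains):
    """strains: readings from the three finger-embedded sensors after the grasp closes."""
    s = np.asarray(strains, dtype=float)
    return min(calibration, key=lambda d: np.linalg.norm(s - calibration[d]))

print(estimate_diameter([0.09, 0.08, 0.09]))   # -> 60.0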
{"title":"A Pneumatic Soft Gripper with Configurable Workspace and Self-sensing","authors":"Qiwen Shao, Ningbin Zhang, Zequn Shen, Guoying Gu","doi":"10.1109/UR49135.2020.9144922","DOIUrl":"https://doi.org/10.1109/UR49135.2020.9144922","url":null,"abstract":"In this paper, we present a novel pneumatic soft gripper with a configurable workspace and perception to grasp various objects and recognize their sizes. The soft gripper consists of three pneu-net soft fingers embedded with resistive strain sensors and cascaded by a stretchable palm. The pneu-net soft fingers are fabricated through a lost-wax casting process. Each strain sensor is designed with an ionic hydrogel-elastomer hybrid structure and embedded into a soft finger to recognize its deformation. The stretchable palm is designed with an opening-closing parallel mechanism driven by a pneumatic fiber-reinforced soft actuator to modify the grasping workspace of the gripper. The characterization experiments are conducted to demonstrate the excellent performance of soft gripper. Based on the measurement of the strain sensors, we propose two kinds of grasping strategies for the soft gripper: a traditional finger-bending identification strategy (FBI strategy) without the active palm and a new palm-closing identification strategy (PCI strategy) with the active palm. Experimental results with an industrial robot demonstrate that our soft gripper with the PCI strategy can perform more robust picking tasks and more accurate identification tasks than with the FBI strategy.","PeriodicalId":360208,"journal":{"name":"2020 17th International Conference on Ubiquitous Robots (UR)","volume":"303 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123233824","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}