Task-oriented Function Detection Based on Operational Tasks
Pub Date: 2019-12-01 DOI: 10.1109/ICAR46387.2019.8981633
Yuchi Ishikawa, Haruya Ishikawa, S. Akizuki, Masaki Yamazaki, Y. Taniguchi, Y. Aoki
We propose a novel representation for the functions of an object, namely Task-oriented Function, which improves upon the idea of Affordance in the field of robot vision. We also propose a convolutional neural network to detect task-oriented functions. This network takes as input an operational task as well as an RGB image and assigns each pixel an appropriate label for every task. Task-oriented Function makes it possible to describe various ways of using an object, because the outputs of the network differ depending on the operational task. We introduce a new dataset for task-oriented function detection, which contains about 1200 RGB images and 6000 pixel-level annotations covering five tasks. Our proposed method reached 0.80 mean IoU on our dataset.
{"title":"Task-oriented Function Detection Based on Operational Tasks","authors":"Yuchi Ishikawa, Haruya Ishikawa, S. Akizuki, Masaki Yamazaki, Y. Taniguchi, Y. Aoki","doi":"10.1109/ICAR46387.2019.8981633","DOIUrl":"https://doi.org/10.1109/ICAR46387.2019.8981633","url":null,"abstract":"We propose novel representations for functions of an object, namely Task-oriented Function, which is improved upon the idea of Afforadance in the field of Robotics Vision. We also propose a convolutional neural network to detect task-oriented functions. This network takes as input an operational task as well as an RGB image and assign each pixel an appropriate label for every task. Task-oriented funciton makes it possible to descibe various ways to use an object because the outputs from the network differ depending on operational tasks. We introduce a new dataset for task-oriented function detection, which contains about 1200 RGB images and 6000 pixel-level annotations assuming five tasks. Our proposed method reached 0.80 mean IOU in our dataset.","PeriodicalId":6606,"journal":{"name":"2019 19th International Conference on Advanced Robotics (ICAR)","volume":"36 1","pages":"635-640"},"PeriodicalIF":0.0,"publicationDate":"2019-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"89736537","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
BooM-Vio: Bootstrapped Monocular Visual-Inertial Odometry with Absolute Trajectory Estimation through Unsupervised Deep Learning
Pub Date: 2019-12-01 DOI: 10.1109/ICAR46387.2019.8981570
Kyle Lindgren, Sarah Leung, W. Nothwang, E. J. Shamwell
Machine learning has emerged as an extraordinary tool for solving many computer vision tasks by extracting and correlating meaningful features from high-dimensional inputs in ways that often exceed the best human-derived modeling efforts. However, the area of vision-aided localization remains diverse, with many traditional, model-based approaches (i.e., filtering- or nonlinear least-squares-based) often outperforming deep, model-free approaches. In this work, we present Bootstrapped Monocular VIO (BooM), a scaled monocular visual-inertial odometry (VIO) solution that combines the complex data-association ability of model-free approaches with the ability of model-based approaches to exploit known geometric dynamics. Our end-to-end, unsupervised deep neural network simultaneously learns to perform visual-inertial odometry and estimate scene depth, while scale is enforced through a loss signal computed from position-change magnitude estimates obtained with traditional methods. We evaluate our network against a state-of-the-art (SoA) approach on the KITTI driving dataset as well as on a micro aerial vehicle (MAV) dataset that we collected in the AirSim simulation environment. We further demonstrate the benefits of our combined approach through robustness tests on degraded trajectories.
{"title":"BooM-Vio: Bootstrapped Monocular Visual-Inertial Odometry with Absolute Trajectory Estimation through Unsupervised Deep Learning","authors":"Kyle Lindgren, Sarah Leung, W. Nothwang, E. J. Shamwell","doi":"10.1109/ICAR46387.2019.8981570","DOIUrl":"https://doi.org/10.1109/ICAR46387.2019.8981570","url":null,"abstract":"Machine learning has emerged as an extraordinary tool for solving many computer vision tasks by extracting and correlating meaningful features from high dimensional inputs in ways that often exceed the best human-derived modeling efforts. However, the area of vision-aided localization remains diverse with many traditional, model-based approaches (i.e. filtering- or nonlinear least- squares- based) often outperforming deep, model-free approaches. In this work, we present Bootstrapped Monocular VIO (BooM), a scaled monocular visual-inertial odometry (VIO) solution that leverages the complex data association ability of model-free approaches with the ability to exploit known geometric dynamics with model-based approaches. Our end-to-end, unsupervised deep neural network simultaneously learns to perform visual-inertial odometry and estimate scene depth while scale is enforced through a loss signal computed from position change magnitude estimates from traditional methods. We evaluate our network against a state-of-the-art (SoA) approach on the KITTI driving dataset as well as a micro aerial vehicle (MAV) dataset that we collected in the AirSim simulation environment. We further demonstrate the benefits of our combined approach through robustness tests on degraded trajectories.","PeriodicalId":6606,"journal":{"name":"2019 19th International Conference on Advanced Robotics (ICAR)","volume":"71 1","pages":"516-522"},"PeriodicalIF":0.0,"publicationDate":"2019-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"89708563","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pedestrian Flow Estimation Using Sparse Observation for Autonomous Vehicles
Pub Date: 2019-12-01 DOI: 10.1109/ICAR46387.2019.8981587
R. B. Neto, K. Ohno, Thomas Westfechtel, S. Tadokoro
One of the major challenges autonomous cars face today is the unpredictability of pedestrian movement in urban environments. Since pedestrian data acquired by vehicles are only sparsely observed, a pedestrian-flow directed graph is proposed to understand pedestrian behavior. In this work, an autonomous electric vehicle is employed to gather LiDAR and camera data. Pedestrian tracking information and semantic information from the environment are used within a probabilistic approach to create the graph. In order to refine the graph, a set of outlier-removal techniques is described. The graph-based pedestrian flow shows a 61.29% increase in coverage area, and the outlier-removal approach successfully removed 81% of the edges.
{"title":"Pedestrian Flow Estimation Using Sparse Observation for Autonomous Vehicles","authors":"R. B. Neto, K. Ohno, Thomas Westfechtel, S. Tadokoro","doi":"10.1109/ICAR46387.2019.8981587","DOIUrl":"https://doi.org/10.1109/ICAR46387.2019.8981587","url":null,"abstract":"One of the major challenges that autonomous cars are facing today is the unpredictability of pedestrian movement in urban environments. Since pedestrian data acquired by vehicles are sparse observed a pedestrian flow directed graph is proposed to understand pedestrian behavior. In this work, an autonomous electric vehicle is employed to gather LiDAR and camera data. Pedestrian tracking information and semantic information from the environment are used with a probabilistic approach to create the graph. In order to refine the graph a set of outlier removal techniques are described. The graph-based pedestrian flow shows an increase of 61.29 % of coverage zone, and the outlier removal approach successfully removed 81 % of the edges.","PeriodicalId":6606,"journal":{"name":"2019 19th International Conference on Advanced Robotics (ICAR)","volume":"68 1","pages":"779-784"},"PeriodicalIF":0.0,"publicationDate":"2019-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"91379173","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Evaluating the Potential Benefit of Autostereoscopy in Laparoscopic Sacrocolpopexy through VR Simulation
Pub Date: 2019-12-01 DOI: 10.1109/ICAR46387.2019.8981553
J. Smet, E. V. Poorten, V. Poliakov, Kenan Niu, Frédérique Chesterman, J. Fornier, M. Ahmad, M. Ourak, Viktor Vörös, J. Deprest
During laparoscopic sacrocolpopexy, pelvic organ prolapse is repaired by suturing one side of a synthetic mesh around the vaginal vault while stapling the other end to the sacrum, restoring the anatomical position of the vagina. A perineal assistant positions and tensions the vault with a vaginal manipulator instrument to properly expose the vaginal tissue to the laparoscopic surgeon. A technical difficulty during this surgery is the loss of depth perception caused by visualizing the patient's internal anatomy on a 2D screen. Especially during precise surgical tasks, a more natural way to perceive the distance between the laparoscopic instruments and the surgical region of interest could be advantageous. This work describes an exploratory study investigating the potential of introducing 3D visualization into this surgical intervention. In particular, experimentation is conducted with autostereoscopic display technology. A mixed-reality setup was constructed featuring a virtual reality model of the vagina, 2D and 3D visualization, a physical interface representing the tissue of the body wall, and a tracking system to track instrument motion. An experiment was conducted in which participants had to navigate the instrument to a number of pre-defined locations under 2D or 3D visualization. Compared to 2D, a considerable reduction in average task time (-42.9%), travelled path length (-31.8%), and errors (-52.2%) was observed when performing the experiment in 3D. While this work demonstrated a potential benefit of autostereoscopic visualization with respect to 2D visualization, in future work we wish to investigate whether a benefit also exists when comparing this technology with conventional stereoscopic visualization, and whether stereoscopy can be used for (semi-)automated guidance during robotic laparoscopy.
{"title":"Evaluating the Potential Benefit of Autostereoscopy in Laparoscopic Sacrocolpopexy through VR Simulation","authors":"J. Smet, E. V. Poorten, V. Poliakov, Kenan Niu, Frédérique Chesterman, J. Fornier, M. Ahmad, M. Ourak, Viktor Vörös, J. Deprest","doi":"10.1109/ICAR46387.2019.8981553","DOIUrl":"https://doi.org/10.1109/ICAR46387.2019.8981553","url":null,"abstract":"During laparoscopic sacrocolpopexy, pelvic organ prolapse is repaired by suturing one side of a synthetic mesh around the vaginal vault while stapling the other end to the sacrum, restoring the anatomical position of the vagina. A perineal assistant positions and tensions the vault with a vaginal manipulator instrument to properly expose the vaginal tissue to the laparoscopic surgeon. A technical difficulty during this surgery is the loss of depth perception due to visualization of the patient's internals on a 2D screen. Especially during precise surgical tasks, a more natural way to understand the distance between the laparoscopic instruments and the surgical region of interest could be advantageous. This work describes an exploratory study to investigate the potential of introducing 3D visualization into this surgical intervention. More in particular, experimentation is conducted with autostereoscopic display technology. A mixed reality setup was constructed featuring a virtual reality model of the vagina, 2D and 3D visualization, a physical interface representing the tissue of the body wall and a tracking system to track instrument motion. An experiment was conducted whereby the participants had to navigate the instrument to a number of pre-defined locations under 2D or 3D visualization. Compared to 2D, a considerable reduction in average task time (-42.9 %), travelled path lenght (-31.8 %) and errors (-52.2 %) was observed when performing the experiment in 3D. Where this work demonstrated a potential benefit of autostereoscopic visualization with respect to 2D visualization, in future work we wish to investigate if there also exists a benefit when comparing this technology with conventional stereoscopic visualization and whether stereoscopy can be used for (semi-) automated guidance during robotic laparoscopy.","PeriodicalId":6606,"journal":{"name":"2019 19th International Conference on Advanced Robotics (ICAR)","volume":"26 1","pages":"566-571"},"PeriodicalIF":0.0,"publicationDate":"2019-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"91029830","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
An Auto-Focusing System for Endoscopic Laser Surgery based on a Hydraulic MEMS Varifocal Mirror
Pub Date: 2019-12-01 DOI: 10.1109/ICAR46387.2019.8981646
A. Geraldes, P. Fiorini, L. Mattos
Endoscopic laser surgery is a minimally invasive procedure in which a fiber laser tool is used to perform precise incisions in soft tissue. Although the precision of such incisions depends on the proper focusing of the laser, endoscopic laser tools use no optics at all, due to the limited space in the endoscopic system. Instead, they rely on placing the tip of the fiber in direct contact with the tissue, which often leads to tissue carbonization. To solve this problem, we developed a compact auto-focusing system based on a MEMS varifocal mirror. The proposed system is able to ensure the focusing of the laser by controlling the deflection of the varifocal mirror using hydraulic actuation. Validation experiments showed that the system is able to keep the variation of the laser spot diameter under 3% for a distance range between 12.15 and 52.15 mm.
{"title":"An Auto-Focusing System for Endoscopic Laser Surgery based on a Hydraulic MEMS Varifocal Mirror","authors":"A. Geraldes, P. Fiorini, L. Mattos","doi":"10.1109/ICAR46387.2019.8981646","DOIUrl":"https://doi.org/10.1109/ICAR46387.2019.8981646","url":null,"abstract":"Endoscopic laser surgery is a minimally invasive procedure in which a fiber laser tool is used to perform precise incisions in soft tissue. Although the precision of such incisions depends on the proper focusing of the laser, endoscopic laser tools use no optics at all, due to the limited space in the endoscopic system. Instead, they rely on placing the tip of the fiber in direct contact with the tissue, which often leads to tissue carbonization. To solve this problem, we developed a compact auto-focusing system based on a MEMS varifocal mirror. The proposed system is able to ensure the focusing of the laser by controlling the deflection of the varifocal mirror using hydraulic actuation. Validation experiments showed that the system is able to keep the variation of the laser spot diameter under 3% for a distance range between 12.15 and 52.15 mm.","PeriodicalId":6606,"journal":{"name":"2019 19th International Conference on Advanced Robotics (ICAR)","volume":"11 1 1","pages":"660-665"},"PeriodicalIF":0.0,"publicationDate":"2019-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"76548056","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Resilient Autonomous Exploration and Mapping of Underground Mines using Aerial Robots
Pub Date: 2019-12-01 DOI: 10.1109/ICAR46387.2019.8981545
K. Alexis
This paper presents a comprehensive solution enabling resilient autonomous exploration and mapping of underground mines using aerial robots. The described methods and systems address critical challenges related to autonomy, perception, and localization under sensor degradation, exploratory path planning in geometrically complex, large, and multi-branching environments, alongside reliable robot operation in communications-denied settings. To facilitate resilient autonomy in such conditions, a set of novel contributions in multi-modal sensor fusion, graph-based path planning, and robot design has been proposed and integrated into micro aerial vehicles, which are not subject to the challenging terrain found in such subterranean settings. The capabilities and performance of the proposed solution are field-verified through a set of real-life autonomous deployments in underground metal mines.
{"title":"Resilient Autonomous Exploration and Mapping of Underground Mines using Aerial Robots","authors":"K. Alexis","doi":"10.1109/ICAR46387.2019.8981545","DOIUrl":"https://doi.org/10.1109/ICAR46387.2019.8981545","url":null,"abstract":"This paper presents a comprehensive solution to enable resilient autonomous exploration and mapping of underground mines using aerial robots. The described methods and systems address critical challenges related to autonomy, perception and localization in conditions of sensor degradation, exploratory path planning in geometrically complex, large and multi-branching environments, alongside reliable robot operation in communications-denied settings. To facilitate resilient autonomy in such conditions, a set of novel contributions in multi-modal sensor fusion, graph-based path planning, and robot design have been proposed and integrated in micro aerial vehicles which are not subject to the challenging terrain found in such subterranean settings. The capabilities and performance of the proposed solution is field-verified through a set of real-life autonomous deployments in underground metal mines.","PeriodicalId":6606,"journal":{"name":"2019 19th International Conference on Advanced Robotics (ICAR)","volume":"53 1","pages":"1-8"},"PeriodicalIF":0.0,"publicationDate":"2019-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"76939514","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Autonomous System for a Racing Quadcopter
Pub Date: 2019-12-01 DOI: 10.1109/ICAR46387.2019.8981660
Adriano M. C. Rezende, Victor R. F. Miranda, Henrique N. Machado, Antonio C. B. Chiella, V. M. Gonçalves, G. Freitas
In this paper, we present a methodology to make an autonomous drone fly through a sequence of gates using only on-board sensors. Our work is a solution to the AlphaPilot Challenge, proposed by Lockheed Martin and the Drone Racing League. First, we propose a strategy to generate a smooth trajectory that passes through the gates. Then, we develop a localization system which fuses image data from an on-board camera with IMU data. Finally, we present a strategy based on artificial vector fields, used to control the quadcopter. Our results are validated with simulations in the official simulator of the competition and with preliminary experiments on a real drone.
{"title":"Autonomous System for a Racing Quadcopter","authors":"Adriano M. C. Rezende, Victor R. F. Miranda, Henrique N. Machado, Antonio C. B. Chiella, V. M. Gonçalves, G. Freitas","doi":"10.1109/ICAR46387.2019.8981660","DOIUrl":"https://doi.org/10.1109/ICAR46387.2019.8981660","url":null,"abstract":"In this paper, we present a methodology to make an autonomous drone fly through a sequence of gates only with on-board sensors. Our work is a solution to the AlphaPilot Challenge, proposed by the Lookheed Martin Company and the Drone Racing League. First, we propose a strategy to generate a smooth trajectory that passes through the gates. Then, we develop a localization system, which merges image data from an on-board camera with IMU data. Finally, we present an artificial vector field based strategy used to control the quadcopter. Our results are validated with simulations in the official simulator of the competition and with preliminary experiments with a real drone.","PeriodicalId":6606,"journal":{"name":"2019 19th International Conference on Advanced Robotics (ICAR)","volume":"10 1","pages":"1-6"},"PeriodicalIF":0.0,"publicationDate":"2019-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"83707078","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Physical Human-Robot Interaction under Joint and Cartesian Constraints
Pub Date: 2019-12-01 DOI: 10.1109/ICAR46387.2019.8981579
J. D. M. Osorio, F. Allmendinger, M. D. Fiore, U. Zimmermann, T. Ortmaier
This paper addresses the problem of including Cartesian and joint constraints in the stack of tasks for torque-controlled robots. A novel approach is proposed to handle Cartesian and joint constraints on three different levels: position, velocity, and acceleration. These constraints are included in the stack of tasks, ensuring the maximum possible fulfillment of the tasks despite the constraints. The algorithm proceeds by creating two tasks with the highest priority in a stack-of-tasks scheme. The highest-priority task saturates the acceleration of joints that would exceed their motion limits. The second-highest-priority task saturates the acceleration of Cartesian directions that would exceed their motion limits. Experiments testing the performance of the algorithm are performed on the KUKA LBR iiwa.
{"title":"Physical Human-Robot Interaction under Joint and Cartesian Constraints","authors":"J. D. M. Osorio, F. Allmendinger, M. D. Fiore, U. Zimmermann, T. Ortmaier","doi":"10.1109/ICAR46387.2019.8981579","DOIUrl":"https://doi.org/10.1109/ICAR46387.2019.8981579","url":null,"abstract":"This paper handles the problem of including Cartesian and joint constraints in the stack of tasks for torque-controlled robots. A novel approach is proposed to handle Cartesian constraints and joint constraints on three different levels: position, velocity and acceleration. These constraints are included in the stack of tasks ensuring the maximum possible fulfillment of the tasks despite of the constraints. The algorithm proceeds by creating two tasks with the highest priority in a stack of tasks scheme. The highest priority task saturates the acceleration of the joints that would exceed their motion limits. The second highest priority task saturates the acceleration of the Cartesian directions that would exceed their motion limits. Experiments to test the performance of the algorithm are performed on the KUKA LBR iiwa.","PeriodicalId":6606,"journal":{"name":"2019 19th International Conference on Advanced Robotics (ICAR)","volume":"13 1","pages":"185-191"},"PeriodicalIF":0.0,"publicationDate":"2019-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"78275345","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
SPRINTER: A Discrete Locomotion Robot for Precision Swarm Printing
Pub Date: 2019-12-01 DOI: 10.1109/ICAR46387.2019.8981621
Kedar Karpe, Ayon Chatterjee, P. Srinivas, Dhanalakshmi Samiappan, Kumar Ramamoorthy, Lorenzo Sabattini
This paper presents SPRINTER, a system and method for multi-robot printing. We discuss the design of a quasi-holonomic mobile robot and present a method that uses a group of such robots to print a large graphical image in a distributed manner. In this distributed printing method, we introduce the concept of image cellularization, which segments the graphic into a group of smaller printing tasks. We then discuss a centralized method to allocate these tasks to each robot and execute the printing process. In summary, we present a multi-robot printing system that increases printing speed and extends the printable area beyond that of traditional industrial printers.
{"title":"SPRINTER: A Discrete Locomotion Robot for Precision Swarm Printing","authors":"Kedar Karpe, Ayon Chatterjee, P. Srinivas, Dhanalakshmi Samiappan, Kumar Ramamoorthy, Lorenzo Sabattini","doi":"10.1109/ICAR46387.2019.8981621","DOIUrl":"https://doi.org/10.1109/ICAR46387.2019.8981621","url":null,"abstract":"This paper presents SPRINTER, a system and method for multi-robot printing. In this paper, we discuss the design of a quasi-holonomic mobile robot and present a method which uses a group of such robots to distributively print a large graphical image. In the distributive printing method, we introduce the concept of image cellularization for segmenting the graphic into a group of smaller printing tasks. We then discuss a centralized method to allocate these tasks to each robot and execute the printing process. In summary, we present a multi-robot printing system which enhances the printing speed and maximizes the printing area of traditional industrial printers.","PeriodicalId":6606,"journal":{"name":"2019 19th International Conference on Advanced Robotics (ICAR)","volume":"27 1","pages":"733-738"},"PeriodicalIF":0.0,"publicationDate":"2019-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"85991948","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Simulation and Analysis of a Full-Active Electro-Hydrostatic Powered Ankle Prosthesis
Pub Date: 2019-12-01 DOI: 10.1109/ICAR46387.2019.8981634
Huan Liu, Qitao Huang, Zhizhong Tong
This paper presents the design and control architecture of a novel fully active powered ankle prosthesis that uses an integrated force-controllable electro-hydrostatic actuator (EHA) to provide both active compliance and sufficient positive power output at terminal stance to assist walking over the whole gait cycle. A 100 W brushless DC motor driving a 0.45 cc/rev bi-directional gear pump serves as the power kernel. Based on a finite-state machine (FSM), a hierarchical controller was designed to ensure control-system performance, with a different control strategy implemented for each individual gait phase. Three independent force sensing resistors (FSRs) mounted under the sole, two pressure transducers, and a displacement sensor used as an ankle rotation sensor provide feedback signals for both state detection and low-level impedance control. A simulation model of the ankle prosthesis system was established in Matlab/Simulink to validate its feasibility. Using pre-sampled biomechanics profiles as input variables and a matched reference group, the conceptual ankle prosthesis turns out to be able to restore the dynamic interaction response of a healthy ankle-foot to a great extent.
{"title":"Simulation and Analysis of a Full-Active Electro-Hydrostatic Powered Ankle Prosthesis","authors":"Huan Liu, Qitao Huang, Zhizhong Tong","doi":"10.1109/ICAR46387.2019.8981634","DOIUrl":"https://doi.org/10.1109/ICAR46387.2019.8981634","url":null,"abstract":"This paper presents the design and control architecture of a novel full-active powered ankle prosthesis which uses integrated force-controllable electro-hydrostatic actuator (EHA) to provide both initiative compliance and sufficient positive power output at terminal stance to assist walking in whole gait cycle. A 100W brushless DC motor driving a 0.45 cc/rev bi-directional gear pump operates as the power kernel. Based on finite-state machine (FSM), a hierarchical controller was designed to ensure the control system performance while different control strategies were implemented on each individual gait phase. Three independent force sensing resistor (FSR) mounted under sole, two pressure transducers and a displacement sensor used as ankle rotation sensor provide feedback signal for both state detection and low-level impedance control. A simulation model of the ankle prosthesis system was established with the help of Matlab/Simulink to validate its feasibility. Using pre-sampled biomechanics profile as input variable and matched group, the conceptual ankle prosthesis turns out to be able to restore the dynamic interaction response of a wholesome ankle-foot to a great extent.","PeriodicalId":6606,"journal":{"name":"2019 19th International Conference on Advanced Robotics (ICAR)","volume":"1 1","pages":"81-86"},"PeriodicalIF":0.0,"publicationDate":"2019-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"88276756","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}