Proceedings of the ... IEEE/RSJ International Conference on Intelligent Robots and Systems: Latest Publications
Data-Driven Variable Impedance Control of a Powered Knee-Ankle Prosthesis for Sit, Stand, and Walk with Minimal Tuning.
Pub Date: 2022-10-01. Epub Date: 2022-12-26. DOI: 10.1109/iros47612.2022.9982037
Cara G Welker, T Kevin Best, Robert D Gregg
Although the average healthy adult transitions from sit to stand over 60 times per day, most research on powered prosthesis control has focused only on walking. In this paper, we present a data-driven controller that enables sitting, standing, and walking with minimal tuning. Our controller comprises two high-level modes, sit/stand and walking, and we develop heuristic biomechanical rules to control transitions between them. We use a phase variable based on the user's thigh angle to parameterize both walking and sit/stand motions, with variable impedance control during ground contact and position control during swing. We extend previous work on data-driven optimization of continuous impedance parameter functions to design the sit/stand control mode from able-bodied data. Experiments with a powered knee-ankle prosthesis used by a participant with above-knee amputation demonstrate promising clinical outcomes, as well as trade-offs between our minimal-tuning approach and accommodation of user preferences. Specifically, our controller enabled the participant to complete the sit/stand task 20% faster and reduced average asymmetry by half compared to his everyday passive prosthesis. The controller also facilitated a timed up-and-go test involving sitting, standing, walking, and turning, with only a mild (10%) decrease in speed compared to the everyday prosthesis. Our sit/stand/walk controller thus enables multiple activities of daily living with minimal tuning and mode switching.
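As a rough, hypothetical illustration of this control structure (not the authors' implementation), the sketch below maps a thigh angle to a phase variable and evaluates a variable-impedance torque law; the parameter functions theta_eq, k, and b are placeholder guesses, whereas the paper optimizes continuous parameter functions from able-bodied data.

import numpy as np

def phase_variable(thigh_angle, theta_min=-20.0, theta_max=40.0):
    """Map the measured thigh angle (deg) to a phase s in [0, 1]."""
    s = (theta_max - thigh_angle) / (theta_max - theta_min)
    return float(np.clip(s, 0.0, 1.0))

def impedance_torque(q, q_dot, s):
    """Variable-impedance joint torque tau = -k(s)(q - q_eq(s)) - b(s)*q_dot.

    The stiffness k(s), damping b(s), and equilibrium angle q_eq(s) below are
    hypothetical smooth functions of phase, standing in for the continuous
    parameter functions the paper optimizes from able-bodied data.
    """
    q_eq = 10.0 + 30.0 * np.sin(np.pi * s)   # placeholder equilibrium angle (deg)
    k = 2.0 + 1.5 * np.cos(2 * np.pi * s)    # placeholder stiffness (Nm/deg)
    b = 0.05                                 # placeholder damping (Nm*s/deg)
    return -k * (q - q_eq) - b * q_dot

s = phase_variable(thigh_angle=15.0)
tau_knee = impedance_torque(q=25.0, q_dot=-5.0, s=s)
print(f"phase={s:.2f}, knee torque={tau_knee:.2f} N*m")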
{"title":"Data-Driven Variable Impedance Control of a Powered Knee-Ankle Prosthesis for Sit, Stand, and Walk with Minimal Tuning.","authors":"Cara G Welker, T Kevin Best, Robert D Gregg","doi":"10.1109/iros47612.2022.9982037","DOIUrl":"10.1109/iros47612.2022.9982037","url":null,"abstract":"<p><p>Although the average healthy adult transitions from sit to stand over 60 times per day, most research on powered prosthesis control has only focused on walking. In this paper, we present a data-driven controller that enables sitting, standing, and walking with minimal tuning. Our controller comprises two high level modes of sit/stand and walking, and we develop heuristic biomechanical rules to control transitions. We use a phase variable based on the user's thigh angle to parameterize both walking and sit/stand motions, and use variable impedance control during ground contact and position control during swing. We extend previous work on data-driven optimization of continuous impedance parameter functions to design the sit/stand control mode using able-bodied data. Experiments with a powered knee-ankle prosthesis used by a participant with above-knee amputation demonstrate promise in clinical outcomes, as well as trade-offs between our minimal-tuning approach and accommodation of user preferences. Specifically, our controller enabled the participant to complete the sit/stand task 20% faster and reduced average asymmetry by half compared to his everyday passive prosthesis. The controller also facilitated a timed up and go test involving sitting, standing, walking, and turning, with only a mild (10%) decrease in speed compared to the everyday prosthesis. Our sit/stand/walk controller enables multiple activities of daily life with minimal tuning and mode switching.</p>","PeriodicalId":74523,"journal":{"name":"Proceedings of the ... IEEE/RSJ International Conference on Intelligent Robots and Systems. IEEE/RSJ International Conference on Intelligent Robots and Systems","volume":"2022 ","pages":"9660-9667"},"PeriodicalIF":0.0,"publicationDate":"2022-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9850431/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"9166299","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Light in the Larynx: a Miniaturized Robotic Optical Fiber for In-office Laser Surgery of the Vocal Folds.
Pub Date: 2022-10-01. Epub Date: 2022-12-26. DOI: 10.1109/iros47612.2022.9981202
Alex J Chiluisa, Nicholas E Pacheco, Hoang S Do, Ryan M Tougas, Emily V Minch, Rositsa Mihaleva, Yao Shen, Yuxiang Liu, Thomas L Carroll, Loris Fichera
This paper reports the design, construction, and experimental validation of a novel hand-held robot for in-office laser surgery of the vocal folds. In-office endoscopic laser surgery is an emerging trend in laryngology: it promises to deliver the same patient outcomes as traditional surgical treatment (i.e., in the operating room) at a fraction of the cost. Unfortunately, office procedures can be challenging to perform; the optical fibers used for laser delivery can only emit light forward in a line-of-sight fashion, which severely limits anatomical access. The robot we present in this paper aims to overcome these challenges. Its end effector is a steerable laser fiber, created by combining a thin optical fiber (ϕ 0.225 mm) with a tendon-actuated nickel-titanium notched sheath that provides bending. The device can be seamlessly used with most commercially available endoscopes, as it is sufficiently small (ϕ 1.1 mm) to pass through a working channel. To control the fiber, we propose a compact actuation unit that mounts on top of the endoscope handle so that, during a procedure, the operating physician can operate both the endoscope and the steerable fiber with a single hand. We report simulation and phantom experiments demonstrating that the proposed device substantially enhances surgical access compared to current clinical fibers.
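To give a sense of why even a short bending section enlarges anatomical access, here is a minimal sketch under the standard constant-curvature assumption commonly used for tendon-actuated notched sheaths; the section length and function names are illustrative, not taken from the paper.

import numpy as np

def bending_tip_position(theta, L=0.010):
    """Tip position (m) of a bending section of arclength L under the
    constant-curvature assumption, for total bend angle theta (rad).

    For theta -> 0 the section is straight: (x, z) = (0, L).
    """
    if abs(theta) < 1e-9:
        return np.array([0.0, L])
    r = L / theta                    # radius of curvature
    x = r * (1.0 - np.cos(theta))    # lateral deflection off the line of sight
    z = r * np.sin(theta)            # remaining axial reach
    return np.array([x, z])

# Sweep the bend angle to see how much lateral access the tip gains.
for deg in (0, 30, 60, 90):
    x, z = bending_tip_position(np.radians(deg))
    print(f"{deg:3d} deg -> x = {x*1e3:5.2f} mm, z = {z*1e3:5.2f} mm")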
{"title":"Light in the Larynx: a Miniaturized Robotic Optical Fiber for In-office Laser Surgery of the Vocal Folds.","authors":"Alex J Chiluisa, Nicholas E Pacheco, Hoang S Do, Ryan M Tougas, Emily V Minch, Rositsa Mihaleva, Yao Shen, Yuxiang Liu, Thomas L Carroll, Loris Fichera","doi":"10.1109/iros47612.2022.9981202","DOIUrl":"10.1109/iros47612.2022.9981202","url":null,"abstract":"<p><p>This paper reports the design, construction, and experimental validation of a novel hand-held robot for in-office laser surgery of the vocal folds. In-office endoscopic laser surgery is an emerging trend in Laryngology: It promises to deliver the same patient outcomes of traditional surgical treatment (i.e., in the operating room), at a fraction of the cost. Unfortunately, office procedures can be challenging to perform; the optical fibers used for laser delivery can only emit light forward in a line-of-sight fashion, which severely limits anatomical access. The robot we present in this paper aims to overcome these challenges. The end effector of the robot is a steerable laser fiber, created through the combination of a thin optical fiber (<i>ϕ</i> 0.225 mm) with a tendon-actuated Nickel-Titanium notched sheath that provides bending. This device can be seamlessly used with most commercially available endoscopes, as it is sufficiently small (<i>ϕ</i> 1.1 mm) to pass through a working channel. To control the fiber, we propose a compact actuation unit that can be mounted on top of the endoscope handle, so that, during a procedure, the operating physician can operate both the endoscope and the steerable fiber with a single hand. We report simulation and phantom experiments demonstrating that the proposed device substantially enhances surgical access compared to current clinical fibers.</p>","PeriodicalId":74523,"journal":{"name":"Proceedings of the ... IEEE/RSJ International Conference on Intelligent Robots and Systems. IEEE/RSJ International Conference on Intelligent Robots and Systems","volume":"2022 ","pages":"427-434"},"PeriodicalIF":0.0,"publicationDate":"2022-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9875830/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"9142915","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Development of a Novel Low-profile Robotic Exoskeleton Glove for Patients with Brachial Plexus Injuries.
Pub Date: 2022-10-01. DOI: 10.1109/iros47612.2022.9981124
Wenda Xu, Yujiong Liu, Pinhas Ben-Tzvi
This paper presents the design and development of a novel, low-profile, robotic exoskeleton glove intended to restore lost grasping functionality for people with brachial plexus injuries. The key idea of the new glove lies in its finger mechanism, which builds on the rigid coupling hybrid mechanism (RCHM) concept. This concept couples the motions of adjacent finger links using rigid coupling mechanisms so that the overall motion (e.g., bending, extension) can be achieved with fewer actuators. The finger mechanism uses the single-degree-of-freedom case of the RCHM, with a rack-and-pinion as the rigid coupling mechanism. This arrangement allows each finger mechanism of the glove to be made as thin as possible while maintaining mechanical robustness. Based on this novel finger mechanism, a two-finger low-profile robotic glove was developed. Remote center of motion mechanisms were used for the metacarpophalangeal (MCP) joints. Kinematic analysis and optimization-based kinematic synthesis were conducted to determine the design parameters of the new glove, and passive abduction/adduction joints were included to improve grasping flexibility. A proof-of-concept prototype was built and pinch-grasping experiments on various objects were conducted. The results validated the mechanism and mechanical design of the new robotic glove and demonstrated its ability to grasp objects of the various shapes and weights used in activities of daily living (ADLs).
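A minimal sketch of the coupling idea, assuming hypothetical coupling ratios and link lengths: a single actuated MCP angle drives all finger joints in fixed proportion, as a rack-and-pinion RCHM stage would enforce mechanically.

import numpy as np

def coupled_finger_fk(q_mcp, ratios=(1.0, 0.8, 0.6), links=(0.045, 0.025, 0.018)):
    """Planar forward kinematics of a finger whose joints are rigidly
    coupled to one actuated MCP angle q_mcp (rad).

    ratios are hypothetical coupling ratios realized mechanically (e.g., by
    rack-and-pinion stages), so one actuator flexes the whole finger.
    Returns the fingertip (x, y) in meters.
    """
    angle, x, y = 0.0, 0.0, 0.0
    for r, L in zip(ratios, links):
        angle += r * q_mcp          # each joint bends in fixed proportion
        x += L * np.cos(angle)
        y += L * np.sin(angle)
    return x, y

print(coupled_finger_fk(np.radians(30)))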
{"title":"Development of a Novel Low-profile Robotic Exoskeleton Glove for Patients with Brachial Plexus Injuries.","authors":"Wenda Xu, Yujiong Liu, Pinhas Ben-Tzvi","doi":"10.1109/iros47612.2022.9981124","DOIUrl":"https://doi.org/10.1109/iros47612.2022.9981124","url":null,"abstract":"This paper presents the design and development of a novel, low-profile, exoskeleton robotic glove aimed for people who suffer from brachial plexus injuries to restore their lost grasping functionality. The key idea of this new glove lies in its new finger mechanism that takes advantage of the rigid coupling hybrid mechanism (RCHM) concept. This mechanism concept couples the motions of the adjacent human finger links using rigid coupling mechanisms so that the overall mechanism motion (e.g., bending, extension, etc.) could be achieved using fewer actuators. The finger mechanism utilizes the single degree of freedom case of the RCHM that uses a rack-and-pinion mechanism as the rigid coupling mechanism. This special arrangement enables to design each finger mechanism of the glove as thin as possible while maintaining mechanical robustness simultaneously. Based on this novel finger mechanism, a two-finger low-profile robotic glove was developed. Remote center of motion mechanisms were used for the metacarpophalangeal (MCP) joints. Kinematic analysis and optimization-based kinematic synthesis were conducted to determine the design parameters of the new glove. Passive abduction/adduction joints were considered to improve the grasping flexibility. A proof-of-concept prototype was built and pinch grasping experiments of various objects were conducted. The results validated the mechanism and the mechanical design of the new robotic glove and demonstrated its functionalities and capabilities in grasping objects with various shapes and weights that are used in activities of daily living (ADLs).","PeriodicalId":74523,"journal":{"name":"Proceedings of the ... IEEE/RSJ International Conference on Intelligent Robots and Systems. IEEE/RSJ International Conference on Intelligent Robots and Systems","volume":"2022 ","pages":"11121-11126"},"PeriodicalIF":0.0,"publicationDate":"2022-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10250018/pdf/nihms-1854353.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"9619789","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A Metric for Finding Robust Start Positions for Medical Steerable Needle Automation.
Pub Date: 2022-10-01. Epub Date: 2022-12-26. DOI: 10.1109/iros47612.2022.9982227
Janine Hoelscher, Inbar Fried, Mengyu Fu, Mihir Patwardhan, Max Christman, Jason Akulian, Robert J Webster, Ron Alterovitz
Steerable needles are medical devices able to follow curvilinear paths to reach targets while circumventing obstacles. In a typical deployment, a human operator places the steerable needle at its start position on a tissue surface and then hands off control to the automation that steers the needle to the target. Because of uncertainty in the operator's placement of the needle, choosing a start position that is robust to deviations is crucial: some start positions may make it impossible for the steerable needle to safely reach the target. We introduce a method to efficiently evaluate steerable needle motion plans for robustness to variation in the start position. The method can be applied to many steerable needle planners and requires only that the needle's orientation angle at insertion can be robotically controlled. Specifically, we build a funnel around a given plan to determine a safe insertion surface: the set of insertion points from which a collision-free motion plan to the goal is guaranteed to be computable. We use this technique to evaluate multiple feasible plans and select the one that maximizes the size of the safe insertion surface. We evaluate our method in a simulated lung biopsy scenario and show that it quickly finds needle plans with a large safe insertion surface.
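The following sketch conveys the selection criterion only in caricature: it Monte-Carlo samples perturbed insertion points around a nominal start and scores a plan by the area of perturbations that still succeed. The plan_is_safe test here is a hypothetical geometric stand-in for the paper's funnel construction and planner query.

import numpy as np

def plan_is_safe(start, target, obstacle_c, obstacle_r):
    """Hypothetical stand-in for 'a collision-free plan exists': test whether
    the straight segment start->target clears a spherical obstacle."""
    d = target - start
    t = np.clip(np.dot(obstacle_c - start, d) / np.dot(d, d), 0.0, 1.0)
    closest = start + t * d
    return np.linalg.norm(obstacle_c - closest) > obstacle_r

def safe_surface_area(nominal, target, obstacle_c, obstacle_r,
                      radius=0.01, n=2000, rng=np.random.default_rng(0)):
    """Monte-Carlo estimate of the safe insertion area around a nominal
    start, perturbing the start in the x-y tissue-surface plane."""
    offsets = rng.uniform(-radius, radius, size=(n, 2))
    hits = 0
    for dx, dy in offsets:
        start = nominal + np.array([dx, dy, 0.0])
        hits += plan_is_safe(start, target, obstacle_c, obstacle_r)
    return (hits / n) * (2 * radius) ** 2   # safe fraction * sampled area (m^2)

nominal = np.array([0.0, 0.0, 0.0])
target = np.array([0.0, 0.02, 0.08])
print(safe_surface_area(nominal, target,
                        obstacle_c=np.array([0.006, 0.01, 0.04]),
                        obstacle_r=0.008))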
{"title":"A Metric for Finding Robust Start Positions for Medical Steerable Needle Automation.","authors":"Janine Hoelscher, Inbar Fried, Mengyu Fu, Mihir Patwardhan, Max Christman, Jason Akulian, Robert J Webster, Ron Alterovitz","doi":"10.1109/iros47612.2022.9982227","DOIUrl":"10.1109/iros47612.2022.9982227","url":null,"abstract":"<p><p>Steerable needles are medical devices with the ability to follow curvilinear paths to reach targets while circumventing obstacles. In the deployment process, a human operator typically places the steerable needle at its start position on a tissue surface and then hands off control to the automation that steers the needle to the target. Due to uncertainty in the placement of the needle by the human operator, choosing a start position that is robust to deviations is crucial since some start positions may make it impossible for the steerable needle to safely reach the target. We introduce a method to efficiently evaluate steerable needle motion plans such that they are safe to variation in the start position. This method can be applied to many steerable needle planners and requires that the needle's orientation angle at insertion can be robotically controlled. Specifically, we introduce a method that builds a funnel around a given plan to determine a safe insertion surface corresponding to insertion points from which it is guaranteed that a collision-free motion plan to the goal can be computed. We use this technique to evaluate multiple feasible plans and select the one that maximizes the size of the safe insertion surface. We evaluate our method through simulation in a lung biopsy scenario and show that the method is able to quickly find needle plans with a large safe insertion surface.</p>","PeriodicalId":74523,"journal":{"name":"Proceedings of the ... IEEE/RSJ International Conference on Intelligent Robots and Systems. IEEE/RSJ International Conference on Intelligent Robots and Systems","volume":"2022 ","pages":"9526-9533"},"PeriodicalIF":0.0,"publicationDate":"2022-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10162587/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"9790993","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Toward FBG-Sensorized Needle Shape Prediction in Tissue Insertions.
Pub Date: 2022-10-01. DOI: 10.1109/iros47612.2022.9981856
Dimitri A Lezcano, Min Jung Kim, Iulian I Iordachita, Jin Seob Kim
Complex needle shape prediction remains a challenge for planning surgical interventions with flexible needles. In this paper, we validate a theoretical method for flexible needle shape prediction that allows for non-uniform curvatures, extending a previous sensor-based model that combines curvature measurements from fiber Bragg grating (FBG) sensors with the mechanics of an inextensible elastic rod to determine and predict the 3D needle shape during insertion. We evaluate the model's shape-sensing and shape-prediction capabilities in single-layer isotropic tissue. Experiments with a four-active-area FBG-sensorized needle were performed in varying single-layer isotropic tissues under stereo vision to provide 3D ground truth of the needle shape. The results validate a viable 3D needle shape prediction model that accounts for non-uniform curvatures in flexible needles, with mean shape-sensing and shape-prediction root-mean-square errors of 0.479 mm and 0.892 mm, respectively.
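As a simplified, assumption-laden sketch of reconstructing a shape from distributed curvature measurements (a first-order dead-reckoning integration, not the paper's inextensible elastic-rod model):

import numpy as np

def shape_from_curvature(kappa_x, kappa_y, ds=0.005):
    """Reconstruct a 3D needle shape by integrating per-sample bending
    curvatures (1/m) about the body x- and y-axes along arclength.

    First-order integration of the frame ODE R' = R [u]_x; the tangent is
    the local z-axis. A stand-in for the paper's mechanics-based model.
    """
    R = np.eye(3)          # body-frame orientation
    p = np.zeros(3)        # position along the needle
    pts = [p.copy()]
    for kx, ky in zip(kappa_x, kappa_y):
        u = np.array([kx, ky, 0.0]) * ds          # incremental rotation vector
        K = np.array([[0, -u[2], u[1]],
                      [u[2], 0, -u[0]],
                      [-u[1], u[0], 0]])
        R = R @ (np.eye(3) + K)                   # first-order rotation update
        p = p + R[:, 2] * ds                      # advance along local tangent
        pts.append(p.copy())
    return np.array(pts)

# 30 samples of constant 5 1/m curvature about x: a gentle planar arc.
shape = shape_from_curvature([5.0] * 30, [0.0] * 30)
print(shape[-1])   # tip position (m)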
{"title":"Toward FBG-Sensorized Needle Shape Prediction in Tissue Insertions.","authors":"Dimitri A Lezcano, Min Jung Kim, Iulian I Iordachita, Jin Seob Kim","doi":"10.1109/iros47612.2022.9981856","DOIUrl":"https://doi.org/10.1109/iros47612.2022.9981856","url":null,"abstract":"<p><p>Complex needle shape prediction remains an issue for planning of surgical interventions of flexible needles. In this paper, we validate a theoretical method for flexible needle shape prediction allowing for non-uniform curvatures, extending upon a previous sensor-based model which combines curvature measurements from fiber Bragg grating (FBG) sensors and the mechanics of an inextensible elastic rod to determine and predict the 3D needle shape during insertion. We evaluate the model's effectiveness in single-layer isotropic tissue for shape sensing and shape prediction capabilities. Experiments on a four-active area, FBG-sensorized needle were performed in varying single-layer isotropic tissues under stereo vision to provide 3D ground truth of the needle shape. The results validate a viable 3D needle shape prediction model accounting for non-uniform curvatures in flexible needles with mean needle shape sensing and prediction root-mean-square errors of 0.479 mm and 0.892 mm, respectively.</p>","PeriodicalId":74523,"journal":{"name":"Proceedings of the ... IEEE/RSJ International Conference on Intelligent Robots and Systems. IEEE/RSJ International Conference on Intelligent Robots and Systems","volume":"2022 ","pages":"3505-3511"},"PeriodicalIF":0.0,"publicationDate":"2022-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9832576/pdf/nihms-1861312.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"10534888","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Localization and Control of Magnetic Suture Needles in Cluttered Surgical Site with Blood and Tissue.
Pub Date: 2021-09-01. DOI: 10.1109/iros51168.2021.9636441
Will Pryor, Yotam Barnoy, Suraj Raval, Xiaolong Liu, Lamar Mair, Daniel Lerner, Onder Erin, Gregory D Hager, Yancy Diaz-Mercado, Axel Krieger
Real-time visual localization of needles is necessary for various surgical applications, including surgical automation and visual feedback. In this study, we investigate localization and autonomous robotic control of needles in the context of our magneto-suturing system. Our system holds potential for surgical manipulation with the benefits of minimal invasiveness and reduced patient side effects. However, the nonlinear magnetic fields produce unintuitive forces and demand delicate position-based control that exceeds the capabilities of direct human manipulation, making automatic needle localization a necessity. Our localization method combines neural-network-based segmentation with classical techniques, and it consistently locates the needle with 0.73 mm RMS error in clean environments and 2.72 mm RMS error in challenging environments with blood and occlusion. The average localization RMS error across all experimental environments is 2.16 mm. We combine this localization method with our closed-loop feedback control system to demonstrate its further applicability to autonomous control. Our needle is able to follow a running suture path in four environments: (1) no blood, no tissue; (2) heavy blood, no tissue; (3) no blood, with tissue; and (4) heavy blood, with tissue. The tip position tracking error ranges from 2.6 mm to 3.7 mm RMS, opening the door towards autonomous suturing tasks.
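A hedged sketch of two generic ingredients such a pipeline typically needs (the paper's actual segmentation network and post-processing are not reproduced here): extracting a needle estimate from a binary mask with classical connected-component analysis, and scoring it with the RMS error metric quoted above.

import numpy as np
from scipy import ndimage

def needle_tip_from_mask(mask):
    """Classical post-processing of a (hypothetical) network segmentation:
    keep the largest connected component and return its centroid (row, col)."""
    labels, n = ndimage.label(mask)
    if n == 0:
        return None
    sizes = ndimage.sum(mask, labels, index=range(1, n + 1))
    largest = 1 + int(np.argmax(sizes))
    return ndimage.center_of_mass(mask, labels, largest)

def rms_error(estimates, ground_truth):
    """RMS Euclidean error between estimated and true needle positions."""
    d = np.linalg.norm(np.asarray(estimates) - np.asarray(ground_truth), axis=1)
    return float(np.sqrt(np.mean(d ** 2)))

mask = np.zeros((64, 64), dtype=bool)
mask[30:33, 10:40] = True                   # synthetic needle blob
print(needle_tip_from_mask(mask))
print(rms_error([[0.0, 0.5], [1.0, 1.2]], [[0.0, 0.0], [1.0, 1.0]]))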
{"title":"Localization and Control of Magnetic Suture Needles in Cluttered Surgical Site with Blood and Tissue.","authors":"Will Pryor, Yotam Barnoy, Suraj Raval, Xiaolong Liu, Lamar Mair, Daniel Lerner, Onder Erin, Gregory D Hager, Yancy Diaz-Mercado, Axel Krieger","doi":"10.1109/iros51168.2021.9636441","DOIUrl":"https://doi.org/10.1109/iros51168.2021.9636441","url":null,"abstract":"<p><p>Real-time visual localization of needles is necessary for various surgical applications, including surgical automation and visual feedback. In this study we investigate localization and autonomous robotic control of needles in the context of our magneto-suturing system. Our system holds the potential for surgical manipulation with the benefit of minimal invasiveness and reduced patient side effects. However, the nonlinear magnetic fields produce unintuitive forces and demand delicate position-based control that exceeds the capabilities of direct human manipulation. This makes automatic needle localization a necessity. Our localization method combines neural network-based segmentation and classical techniques, and we are able to consistently locate our needle with 0.73 mm RMS error in clean environments and 2.72 mm RMS error in challenging environments with blood and occlusion. The average localization RMS error is 2.16 mm for all environments we used in the experiments. We combine this localization method with our closed-loop feedback control system to demonstrate the further applicability of localization to autonomous control. Our needle is able to follow a running suture path in (1) no blood, no tissue; (2) heavy blood, no tissue; (3) no blood, with tissue; and (4) heavy blood, with tissue environments. The tip position tracking error ranges from 2.6 mm to 3.7 mm RMS, opening the door towards autonomous suturing tasks.</p>","PeriodicalId":74523,"journal":{"name":"Proceedings of the ... IEEE/RSJ International Conference on Intelligent Robots and Systems. IEEE/RSJ International Conference on Intelligent Robots and Systems","volume":"2021 ","pages":"524-531"},"PeriodicalIF":0.0,"publicationDate":"2021-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8871455/pdf/nihms-1721262.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"10363521","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
NYU-VPR: Long-Term Visual Place Recognition Benchmark with View Direction and Data Anonymization Influences.
Pub Date: 2021-09-01. DOI: 10.1109/iros51168.2021.9636640
Diwei Sheng, Yuxiang Chai, Xinru Li, Chen Feng, Jianzhe Lin, Claudio Silva, John-Ross Rizzo
Visual place recognition (VPR) is critical not only for localization and mapping in autonomous driving vehicles, but also for assistive navigation for the visually impaired. Enabling a long-term, large-scale VPR system requires addressing several challenges. First, different applications may require different image view directions, such as front views for self-driving cars versus side views for people with low vision. Second, VPR in metropolitan scenes can raise privacy concerns because pedestrian and vehicle identity information is imaged, calling for data anonymization before VPR queries and database construction. Both factors can lead to VPR performance variations that are not yet well understood. To study their influences, we present the NYU-VPR dataset, which contains more than 200,000 images over a 2 km × 2 km area near the New York University campus, taken throughout 2016. We present benchmark results for several popular VPR algorithms showing that side views are significantly more challenging for current VPR methods while the influence of data anonymization is almost negligible, together with our hypothetical explanations and in-depth analysis.
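For readers unfamiliar with how such benchmarks are typically scored, below is a generic recall@N evaluation sketch for VPR; the descriptor dimensionality, distance threshold, and synthetic data are illustrative assumptions, not the NYU-VPR protocol.

import numpy as np

def recall_at_n(query_desc, db_desc, query_pos, db_pos, n=5, dist_thresh=25.0):
    """Fraction of queries for which at least one of the top-n retrieved
    database images lies within dist_thresh meters of the query's true position."""
    hits = 0
    for qd, qp in zip(query_desc, query_pos):
        sims = db_desc @ qd                 # cosine similarity (L2-normalized descriptors)
        top = np.argsort(-sims)[:n]
        if np.any(np.linalg.norm(db_pos[top] - qp, axis=1) < dist_thresh):
            hits += 1
    return hits / len(query_desc)

rng = np.random.default_rng(0)
db = rng.normal(size=(100, 16)); db /= np.linalg.norm(db, axis=1, keepdims=True)
q = db[:10] + 0.05 * rng.normal(size=(10, 16)); q /= np.linalg.norm(q, axis=1, keepdims=True)
db_pos = rng.uniform(0, 2000, size=(100, 2)); q_pos = db_pos[:10]
print(recall_at_n(q, db, q_pos, db_pos))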
{"title":"NYU-VPR: Long-Term Visual Place Recognition Benchmark with View Direction and Data Anonymization Influences.","authors":"Diwei Sheng, Yuxiang Chai, Xinru Li, Chen Feng, Jianzhe Lin, Claudio Silva, John-Ross Rizzo","doi":"10.1109/iros51168.2021.9636640","DOIUrl":"10.1109/iros51168.2021.9636640","url":null,"abstract":"<p><p>Visual place recognition (VPR) is critical in not only localization and mapping for autonomous driving vehicles, but also assistive navigation for the visually impaired population. To enable a long-term VPR system on a large scale, several challenges need to be addressed. First, different applications could require different image view directions, such as front views for self-driving cars while side views for the low vision people. Second, VPR in metropolitan scenes can often cause privacy concerns due to the imaging of pedestrian and vehicle identity information, calling for the need for data anonymization before VPR queries and database construction. Both factors could lead to VPR performance variations that are not well understood yet. To study their influences, we present the NYU-VPR dataset that contains more than 200,000 images over a 2km×2km area near the New York University campus, taken within the whole year of 2016. We present benchmark results on several popular VPR algorithms showing that side views are significantly more challenging for current VPR methods while the influence of data anonymization is almost negligible, together with our hypothetical explanations and in-depth analysis.</p>","PeriodicalId":74523,"journal":{"name":"Proceedings of the ... IEEE/RSJ International Conference on Intelligent Robots and Systems. IEEE/RSJ International Conference on Intelligent Robots and Systems","volume":" ","pages":"9773-9779"},"PeriodicalIF":0.0,"publicationDate":"2021-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9394449/pdf/nihms-1827810.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"40633245","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Phase-Variable Control of a Powered Knee-Ankle Prosthesis over Continuously Varying Speeds and Inclines.
Pub Date: 2021-09-01. DOI: 10.1109/iros51168.2021.9636180
T Kevin Best, Kyle R Embry, Elliott J Rouse, Robert D Gregg
Most controllers for lower-limb robotic prostheses require individually tuned parameter sets for every combination of speed and incline that the device is designed for. Because ambulation occurs over a continuum of speeds and inclines, this design paradigm requires tuning of a potentially prohibitively large number of parameters. This limitation motivates an alternative control framework that enables walking over a range of speeds and inclines while requiring only a limited number of tunable parameters. In this work, we present the implementation of a continuously varying kinematic controller on a custom powered knee-ankle prosthesis. The controller uses a phase variable derived from the residual thigh angle, along with real-time estimates of ground inclination and walking speed, to compute the appropriate knee and ankle joint angles from a continuous model of able-bodied kinematic data. We modify an existing phase variable architecture to allow for changes in speeds and inclines, quantify the closed-loop accuracy of the speed and incline estimation algorithms for various references, and experimentally validate the controller by observing that it replicates kinematic trends seen in able-bodied gait as speed and incline vary.
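A minimal sketch of a continuously varying kinematic reference, assuming a hypothetical gridded able-bodied table interpolated over phase, speed, and incline (the paper instead draws joint angles from a continuous kinematic model):

import numpy as np
from scipy.interpolate import RegularGridInterpolator

# Hypothetical able-bodied reference: knee angle (deg) tabulated on a
# (phase, speed, incline) grid; the controller queries it continuously.
phases = np.linspace(0.0, 1.0, 50)
speeds = np.array([0.8, 1.0, 1.2, 1.4])              # m/s
inclines = np.array([-10.0, -5.0, 0.0, 5.0, 10.0])   # deg
P, S, I = np.meshgrid(phases, speeds, inclines, indexing="ij")
knee_table = 35.0 * np.sin(2 * np.pi * P) * (1 + 0.1 * (S - 1.0)) + 0.5 * I

knee_ref = RegularGridInterpolator((phases, speeds, inclines), knee_table)

def knee_setpoint(phase, speed_est, incline_est):
    """Joint setpoint from the phase variable and real-time speed/incline
    estimates (both clamped to the modeled range)."""
    speed = np.clip(speed_est, speeds[0], speeds[-1])
    incline = np.clip(incline_est, inclines[0], inclines[-1])
    return float(knee_ref([phase % 1.0, speed, incline]))

print(knee_setpoint(0.3, 1.1, 3.0))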
{"title":"Phase-Variable Control of a Powered Knee-Ankle Prosthesis over Continuously Varying Speeds and Inclines.","authors":"T Kevin Best, Kyle R Embry, Elliott J Rouse, Robert D Gregg","doi":"10.1109/iros51168.2021.9636180","DOIUrl":"https://doi.org/10.1109/iros51168.2021.9636180","url":null,"abstract":"<p><p>Most controllers for lower-limb robotic prostheses require individually tuned parameter sets for every combination of speed and incline that the device is designed for. Because ambulation occurs over a continuum of speeds and inclines, this design paradigm requires tuning of a potentially prohibitively large number of parameters. This limitation motivates an alternative control framework that enables walking over a range of speeds and inclines while requiring only a limited number of tunable parameters. In this work, we present the implementation of a continuously varying kinematic controller on a custom powered knee-ankle prosthesis. The controller uses a phase variable derived from the residual thigh angle, along with real-time estimates of ground inclination and walking speed, to compute the appropriate knee and ankle joint angles from a continuous model of able-bodied kinematic data. We modify an existing phase variable architecture to allow for changes in speeds and inclines, quantify the closed-loop accuracy of the speed and incline estimation algorithms for various references, and experimentally validate the controller by observing that it replicates kinematic trends seen in able-bodied gait as speed and incline vary.</p>","PeriodicalId":74523,"journal":{"name":"Proceedings of the ... IEEE/RSJ International Conference on Intelligent Robots and Systems. IEEE/RSJ International Conference on Intelligent Robots and Systems","volume":"2021 ","pages":"6182-6189"},"PeriodicalIF":0.0,"publicationDate":"2021-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8890507/pdf/nihms-1726048.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"10351178","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Autonomous Scanning Target Localization for Robotic Lung Ultrasound Imaging.
Pub Date: 2021-09-01. DOI: 10.1109/iros51168.2021.9635902
Xihan Ma, Ziming Zhang, Haichong K Zhang
Amid the ongoing global COVID-19 pandemic, lung ultrasound (LUS) has emerged as an effective means of diagnosing respiratory diseases and evaluating their severity. However, close physical contact is unavoidable in conventional clinical ultrasound, increasing the infection risk for health-care workers. A scanning approach involving minimal physical contact between operator and patient is therefore vital to maximize the safety of clinical ultrasound procedures. A robotic ultrasound platform can satisfy this need by remotely manipulating the ultrasound probe with a robotic arm. This paper proposes a robotic LUS system that automatically identifies and executes the ultrasound probe placement pose without manual input. An RGB-D camera is used to recognize the scanning targets on the patient through a learning-based human pose estimation algorithm and to solve for the landing pose that attaches the probe perpendicular to the tissue surface; a position/force controller handles intraoperative probe pose adjustment to maintain the contact force. We evaluated the scanning-area localization accuracy, motion execution accuracy, and ultrasound image acquisition capability using an upper-torso mannequin and a realistic lung ultrasound phantom with healthy and COVID-19-infected lung anatomy. Results demonstrated an overall scanning target localization accuracy of 19.67 ± 4.92 mm and a probe landing pose estimation accuracy of 6.92 ± 2.75 mm in translation and 10.35 ± 2.97 deg in rotation. The contact force-controlled robotic scanning enabled successful ultrasound image collection, capturing pathological landmarks.
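As a small illustrative piece of such a system, the sketch below constructs a probe landing pose whose axis aligns with the inward surface normal at the target, as could be estimated from an RGB-D point cloud; the frame conventions and function names are assumptions, not the paper's code.

import numpy as np

def landing_pose(target, normal):
    """Build a probe pose whose z-axis points into the tissue along the
    inward surface normal, so the probe lands perpendicular to the surface.

    target: 3D contact point; normal: outward unit surface normal
    (e.g., estimated from the RGB-D point cloud around the target).
    """
    z = -normal / np.linalg.norm(normal)      # probe axis: into the tissue
    ref = np.array([1.0, 0.0, 0.0])
    if abs(np.dot(ref, z)) > 0.9:             # avoid a degenerate cross product
        ref = np.array([0.0, 1.0, 0.0])
    x = np.cross(ref, z); x /= np.linalg.norm(x)
    y = np.cross(z, x)
    R = np.column_stack([x, y, z])            # rotation: probe frame -> base frame
    return R, target

R, p = landing_pose(np.array([0.4, 0.0, 0.3]), np.array([0.0, 0.0, 1.0]))
print(np.round(R, 3), p)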
{"title":"Autonomous Scanning Target Localization for Robotic Lung Ultrasound Imaging.","authors":"Xihan Ma, Ziming Zhang, Haichong K Zhang","doi":"10.1109/iros51168.2021.9635902","DOIUrl":"https://doi.org/10.1109/iros51168.2021.9635902","url":null,"abstract":"<p><p>Under the ceaseless global COVID-19 pandemic, lung ultrasound (LUS) is the emerging way for effective diagnosis and severeness evaluation of respiratory diseases. However, close physical contact is unavoidable in conventional clinical ultrasound, increasing the infection risk for health-care workers. Hence, a scanning approach involving minimal physical contact between an operator and a patient is vital to maximize the safety of clinical ultrasound procedures. A robotic ultrasound platform can satisfy this need by remotely manipulating the ultrasound probe with a robotic arm. This paper proposes a robotic LUS system that incorporates the automatic identification and execution of the ultrasound probe placement pose without manual input. An RGB-D camera is utilized to recognize the scanning targets on the patient through a learning-based human pose estimation algorithm and solve for the landing pose to attach the probe vertically to the tissue surface; A position/force controller is designed to handle intraoperative probe pose adjustment for maintaining the contact force. We evaluated the scanning area localization accuracy, motion execution accuracy, and ultrasound image acquisition capability using an upper torso mannequin and a realistic lung ultrasound phantom with healthy and COVID-19-infected lung anatomy. Results demonstrated the overall scanning target localization accuracy of 19.67 ± 4.92 mm and the probe landing pose estimation accuracy of 6.92 ± 2.75 mm in translation, 10.35 ± 2.97 deg in rotation. The contact force-controlled robotic scanning allowed the successful ultrasound image collection, capturing pathological landmarks.</p>","PeriodicalId":74523,"journal":{"name":"Proceedings of the ... IEEE/RSJ International Conference on Intelligent Robots and Systems. IEEE/RSJ International Conference on Intelligent Robots and Systems","volume":"2021 ","pages":"9467-9474"},"PeriodicalIF":0.0,"publicationDate":"2021-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9373068/pdf/nihms-1822595.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"10351719","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
StereoCNC: A Stereovision-guided Robotic Laser System.
Pub Date: 2021-09-01. Epub Date: 2021-12-16. DOI: 10.1109/iros51168.2021.9636050
Guangshen Ma, Weston Ross, Patrick J Codd
This paper proposes an end-to-end stereovision-guided laser surgery system, referred to as StereoCNC, that can perform laser ablation on targets selected by a human operator in the color image. Two digital cameras are integrated into a previously developed robotic laser system to add a color sensing modality and form the stereo vision. A calibration method registers the coordinate frames between the stereo cameras and the laser system, modeled as a 3D-to-3D least-squares problem. The calibration reprojection errors are used to characterize a 3D error field via Gaussian process regression (GPR); this error field predicts the calibration error for new point cloud data so that a position with lower calibration error can be identified. A stereovision-guided laser ablation pipeline optimizes the positioning of the surgical site within the error field using a genetic algorithm search; mechanical stages then move the site to the low-error region. The pipeline is validated by experiments on phantoms with color texture and various geometric shapes. The overall targeting accuracy of the system achieved an average RMSE of 0.13 ± 0.02 mm and a maximum error of 0.34 ± 0.06 mm, as measured from pre- and post-ablation images. The results show the potential of the developed stereovision-guided robotic system for superficial laser surgery, including dermatologic applications and removal of exposed tumorous tissue in neurosurgery.
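The 3D-to-3D least-squares registration mentioned above has a classical closed-form solution (the Arun/Kabsch SVD method); a self-contained sketch follows, with synthetic data standing in for real camera-laser correspondences. The GPR error field and genetic-algorithm search are not shown.

import numpy as np

def rigid_register(A, B):
    """Least-squares rigid transform (R, t) mapping points A onto B
    (Arun/Kabsch SVD method), minimizing sum ||R a_i + t - b_i||^2."""
    ca, cb = A.mean(axis=0), B.mean(axis=0)
    H = (A - ca).T @ (B - cb)
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.linalg.det(Vt.T @ U.T)])   # guard against reflection
    R = Vt.T @ D @ U.T
    t = cb - R @ ca
    return R, t

# Synthetic check: recover a known rotation/translation from noisy pairs.
rng = np.random.default_rng(1)
A = rng.uniform(-0.05, 0.05, size=(20, 3))
theta = np.radians(20)
R_true = np.array([[np.cos(theta), -np.sin(theta), 0],
                   [np.sin(theta),  np.cos(theta), 0],
                   [0, 0, 1]])
B = A @ R_true.T + np.array([0.01, -0.02, 0.03]) + 1e-4 * rng.normal(size=A.shape)
R, t = rigid_register(A, B)
residual = np.linalg.norm(A @ R.T + t - B, axis=1)
print(f"RMS residual: {np.sqrt(np.mean(residual**2)):.2e} m")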
{"title":"StereoCNC: A Stereovision-guided Robotic Laser System.","authors":"Guangshen Ma, Weston Ross, Patrick J Codd","doi":"10.1109/iros51168.2021.9636050","DOIUrl":"https://doi.org/10.1109/iros51168.2021.9636050","url":null,"abstract":"<p><p>This paper proposes an End-to-End stereovision-guided laser surgery system that can conduct laser ablation on targets selected by human operators in the color image, referred as <i>StereoCNC</i>. Two digital cameras are integrated into a previously developed robotic laser system to add a color sensing modality and formulate the stereovision. A calibration method is implemented to register the coordinate frames between stereo cameras and the laser system, modelled as a 3D-to-3D least-squares problem. The calibration reprojection errors are used to characterize a 3D error field by Gaussian Process Regression (GPR). This error field can make predictions for new point cloud data to identify an optimal position with lower calibration errors. A stereovision-guided laser ablation pipeline is proposed to optimize the positioning of the surgical site within the error field, which is achieved with a Genetic Algorithm search; mechanical stages move the site to the low-error region. The pipeline is validated by the experiments on phantoms with color texture and various geometric shapes. The overall targeting accuracy of the system achieved an average RMSE of 0.13 ± 0.02 <i>mm</i> and maximum error of 0.34 ± 0.06 <i>mm</i>, as measured by pre- and post-laser ablation images. The results show potential applications of using the developed stereovision-guided robotic system for superficial laser surgery, including dermatologic applications or removal of exposed tumorous tissue in neurosurgery.</p>","PeriodicalId":74523,"journal":{"name":"Proceedings of the ... IEEE/RSJ International Conference on Intelligent Robots and Systems. IEEE/RSJ International Conference on Intelligent Robots and Systems","volume":" ","pages":"540-547"},"PeriodicalIF":0.0,"publicationDate":"2021-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9358620/pdf/nihms-1814504.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"40685547","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}