Pub Date : 2023-12-20 DOI: 10.1177/02783649231204898
Andrej Kitanov, V. Indelman
Determining a globally optimal solution of belief space planning (BSP) in high-dimensional state spaces directly is computationally expensive, as it involves belief propagation and objective function evaluation for each candidate action. However, many problems of interest, such as active SLAM, exhibit structure that can be harnessed to expedite planning. Moreover, to choose an optimal action, an exact value of the objective function is not required as long as the actions can be ranked in the same order. In this paper we leverage these two key aspects and present the topological belief space planning (t-bsp) concept, which uses topological signatures to perform this ranking for information-theoretic cost functions, considering only the topologies of factor graphs that correspond to future posterior beliefs. In particular, we propose a highly efficient topological signature based on the von Neumann graph entropy that is a function of graph node degrees and supports incremental updates. We analyze it in the context of active pose SLAM and derive error bounds between the proposed topological signature and the original information-theoretic cost function. These bounds are then used to provide performance guarantees for t-bsp with respect to a given solver of the original information-theoretic BSP problem. Realistic and synthetic simulations demonstrate a drastic speed-up of the proposed method relative to state-of-the-art methods while retaining the ability to select a near-optimal solution. A proof of concept of t-bsp is given in a small-scale real-world active SLAM experiment.
Title: Topological belief space planning for active SLAM with pairwise Gaussian potentials and performance guarantees
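The degree-based signature can be illustrated with a small sketch. This is not the authors' implementation; it uses a common degree-based approximation of the von Neumann graph entropy, H ≈ 1 − 1/n − (1/n²) Σ_{(u,v)∈E} 1/(d_u d_v), over a pose-graph topology, and the toy graph and candidate loop closures are hypothetical:

```python
def vn_entropy_approx(edges, n):
    """Degree-based approximation of the von Neumann graph entropy:
    H ~= 1 - 1/n - (1/n^2) * sum over edges (u, v) of 1/(deg(u)*deg(v)).
    Only node degrees are needed, so the signature can be updated
    incrementally as factors (edges) are added."""
    deg = [0] * n
    for u, v in edges:
        deg[u] += 1
        deg[v] += 1
    s = sum(1.0 / (deg[u] * deg[v]) for u, v in edges)
    return 1.0 - 1.0 / n - s / (n * n)

def rank_candidates(base_edges, n, candidate_edge_sets):
    """Rank candidate actions by the entropy of the factor-graph topology
    each would induce (highest signature first), without belief propagation."""
    scored = [(vn_entropy_approx(base_edges + extra, n), i)
              for i, extra in enumerate(candidate_edge_sets)]
    return sorted(scored, reverse=True)
```

On a 4-pose odometry chain, a candidate that adds a loop-closure edge scores above one that adds nothing, matching the intuition that loop closures reduce uncertainty.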
Collision avoidance is a challenging problem for multi-segment continuum robots owing to their flexible structure, limited workspaces, and restricted visual feedback, particularly when they are used in teleoperated minimally invasive surgery. This study proposes a comprehensive control framework that allows these continuum robots to automatically avoid collisions and self-collisions without interfering with the surgeon’s control of the end effector’s movement. The framework implements early detection of collisions and active avoidance strategies by expressing the body geometry of the multi-segment continuum robot and the differential kinematics of any cross-section using screw theory. With the robot’s parameterized shape and selected checkpoints on the obstacle’s surface, we can determine the minimum distance between the robot and an arbitrary obstacle, and locate the nearest point on the robot. Furthermore, we extend the null-space-based control method to accommodate redundant, non-redundant, and multiple continuum robots. The avoidance capability is assessed through an instantaneous and a global criterion based on ellipsoids and possible movement ranges. Simulations and physical experiments involving continuum robots with different degrees of freedom performing various tasks were conducted to thoroughly validate the proposed framework. The results demonstrated its feasibility and effectiveness in minimizing the risk of collisions while maintaining the surgeon’s control over the end effector.
Title: Active collision avoidance for teleoperated multi-segment continuum robots toward minimally invasive surgery
Authors: Jianhua Li, Dingjia Li, Chongyang Wang, Wei Guo, Zhidong Wang, Zhongtao Zhang, Hao Liu
Pub Date : 2023-12-18 DOI: 10.1177/02783649231220955
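The central null-space idea — letting the avoidance motion ride in the arm's redundancy so it cannot disturb the surgeon's end-effector command — can be sketched generically. This is a minimal rigid-Jacobian illustration, not the paper's screw-theory formulation; `J`, `x_dot`, and the avoidance gradient below are placeholders:

```python
import numpy as np

def null_space_step(J, x_dot, q_dot_avoid):
    """Resolve joint rates so the end effector tracks x_dot (the primary
    task, e.g., the teleoperation command) while the collision-avoidance
    motion q_dot_avoid is projected into the Jacobian null space, where it
    cannot change the end-effector velocity."""
    J_pinv = np.linalg.pinv(J)
    N = np.eye(J.shape[1]) - J_pinv @ J   # null-space projector
    return J_pinv @ x_dot + N @ q_dot_avoid
```

For a redundant arm (more joints than task dimensions), any `q_dot_avoid` passed through the projector leaves the commanded task velocity exactly intact.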
Pub Date : 2023-12-18 DOI: 10.1177/02783649231218720
Huan Nguyen, R. Andersen, Evangelos Boukas, Kostas Alexis
Autonomous navigation and information gathering in challenging environments are demanding since the robot’s sensors may be susceptible to non-negligible noise, its localization and mapping may be subject to significant uncertainty and drift, and performing collision-checking or evaluating utility functions using a map often requires high computational costs. We propose a learning-based method to efficiently tackle this problem without relying on a map of the environment or the robot’s position. Our method utilizes a Collision Prediction Network (CPN) for predicting the collision scores of a set of action sequences, and an Information gain Prediction Network (IPN) for estimating their associated information gain. Both networks assume access to a) the depth image (CPN) or the depth image and the detection mask from any visual method (IPN), b) the robot’s partial state (including its linear velocities, z-axis angular velocity, and roll/pitch angles), and c) a library of action sequences. Specifically, the CPN accounts for the estimation uncertainty of the robot’s partial state and the neural network’s epistemic uncertainty by using the Unscented Transform and an ensemble of neural networks. The outputs of the networks are combined with a goal vector to identify the next-best-action sequence. Simulation studies demonstrate the method’s robustness against noisy robot velocity estimates and depth images, alongside its advantages compared to state-of-the-art methods and baselines in (visually-attentive) navigation tasks. Lastly, multiple real-world experiments are presented, including safe flights at 2.5 m/s in a cluttered corridor, and missions inside a dense forest alongside visually-attentive navigation in industrial and university buildings.
Title: Uncertainty-aware visually-attentive navigation using deep neural networks
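A minimal sketch of how ensemble collision scores, predicted information gain, and a goal term might be combined to pick the next-best action sequence. The pessimistic scoring rule, the threshold, and the weights here are illustrative assumptions, not the paper's exact formulation:

```python
import numpy as np

def select_action(collision_ensemble, info_gain, goal_align,
                  kappa=1.0, c_max=0.3, w_info=1.0, w_goal=1.0):
    """Pick the next-best action sequence from a library of A sequences.

    collision_ensemble: (E, A) collision scores from E ensemble members
        (higher = more likely to collide). Ensemble disagreement is a
        proxy for epistemic uncertainty, so the risk score penalizes it;
        sequences whose risk exceeds c_max are masked out entirely."""
    mean = collision_ensemble.mean(axis=0)
    std = collision_ensemble.std(axis=0)
    risk = mean + kappa * std                 # pessimistic collision score
    utility = w_info * info_gain + w_goal * goal_align
    utility = np.where(risk < c_max, utility, -np.inf)
    return int(np.argmax(utility))
```

With this rule, an action sequence with high predicted utility is still rejected if the ensemble members disagree strongly about its collision score.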
Modeling and manipulating elasto-plastic objects are essential capabilities for robots to perform complex industrial and household interaction tasks (e.g., stuffing dumplings, rolling sushi, and making pottery). However, due to the high degrees of freedom of elasto-plastic objects, significant challenges exist in virtually every aspect of the robotic manipulation pipeline, for example, representing the states, modeling the dynamics, and synthesizing the control signals. We propose to tackle these challenges by employing a particle-based representation for elasto-plastic objects in a model-based planning framework. Our system, RoboCraft, only assumes access to raw RGBD visual observations. It transforms the sensory data into particles and learns a particle-based dynamics model using graph neural networks (GNNs) to capture the structure of the underlying system. The learned model can then be coupled with model predictive control (MPC) algorithms to plan the robot’s behavior. We show through experiments that with just 10 min of real-world robot interaction data, our robot can learn a dynamics model that can be used to synthesize control signals to deform elasto-plastic objects into various complex target shapes, including shapes that the robot has never encountered before. We perform systematic evaluations in both simulation and the real world to demonstrate the robot’s manipulation capabilities.
Title: RoboCraft: Learning to see, simulate, and shape elasto-plastic objects in 3D with graph networks
Authors: Haochen Shi, Huazhe Xu, Zhiao Huang, Yunzhu Li, Jiajun Wu
Pub Date : 2023-12-18 DOI: 10.1177/02783649231219020
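The learned-model-plus-MPC loop can be sketched with random-shooting MPC over a one-step dynamics function. The `dynamics` callable stands in for the trained GNN; the toy model, horizon, sampling range, and terminal cost are assumptions for illustration:

```python
import numpy as np

def shooting_mpc(dynamics, state, target, horizon=5, n_samples=256, rng=None):
    """Random-shooting MPC: sample candidate action sequences, roll each
    out through the (learned) dynamics model, and return the first action
    of the lowest-cost sequence. `dynamics(state, action) -> next_state`
    plays the role of the GNN; cost is the distance of the final state to
    the target shape. Actions are assumed to share the state's dimension."""
    rng = rng or np.random.default_rng(0)
    best_cost, best_action = np.inf, None
    for _ in range(n_samples):
        actions = rng.uniform(-1.0, 1.0, size=(horizon, state.shape[0]))
        s = state
        for a in actions:
            s = dynamics(s, a)
        cost = np.linalg.norm(s - target)
        if cost < best_cost:
            best_cost, best_action = cost, actions[0]
    return best_action, best_cost
```

In a receding-horizon loop, only the returned first action would be executed before re-planning from the newly observed particle state.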
Grasping is a key task for robots interacting with humans and the environment. Soft grippers have been widely studied, and some have been applied in industry and daily life. Typical soft grippers face two challenges: lack of stiffness and insufficient adaptability to various objects. Inspired by the human hand, this paper proposes a soft-rigid hybrid pneumatic gripper composed of fingers with soft skin and rigid endoskeletons, and an active palm. Through different combinations of the four joints’ locking states within the rigid endoskeleton, each finger obtains 9 different postures in its inflating state and 13 different postures in its deflating state, endowing the gripper with the capability of adapting to a wider variety of objects. Simultaneously, due to the endoskeletons, the lateral stiffness of the gripper is significantly enhanced (load-to-weight ratio of about 7.5 for lateral grasping). We also propose a series of strategies for grasping objects of different sizes and shapes that exploit the versatile configurations of the gripper. Experiments demonstrated that the gripper conformed well to the surfaces of cylindrical and prismatic objects and successfully grasped all tool items and shape items in the Yale–CMU–Berkeley object set.
Title: Design and implementation of an underactuated gripper with enhanced shape adaptability and lateral stiffness through semi-active multi-degree-of-freedom endoskeletons
Authors: Yafeng Cui, Xin An, Zhonghan Lin, Zhibin Guo, Xin-Jun Liu, Huichan Zhao
Pub Date : 2023-12-14 DOI: 10.1177/02783649231220674
Pub Date : 2023-12-01 DOI: 10.1177/02783649231200595
Jialei Shi, A. Shariati, Sara-Adela Abad, Yuanchang Liu, Jian S Dai, Helge Wurdemann
Soft robots have been investigated for various applications due to their inherently superior deformability and flexibility compared to rigid-link robots. However, these robots struggle to perform tasks that require on-demand stiffness, that is, exerting sufficient forces within an allowable deflection. In addition, the soft and compliant materials introduce large deformations and non-negligible nonlinearity, which makes the stiffness analysis and modelling of soft robots fundamentally challenging. This paper proposes a modelling framework to investigate the underlying stiffness and the equivalent compliance properties of soft robots under different configurations. Firstly, a modelling and analysis methodology is described based on Lie theory. Here, we derive two sets of compliance models, based on the piecewise constant curvature and Cosserat rod models, respectively. Furthermore, the methodology can accommodate the nonlinear responses (e.g., bending angles) resulting from elongation of the robots. Using this methodology, the general Cartesian stiffness or compliance matrix can be derived and used for configuration-dependent stiffness analysis. The proposed framework is then instantiated and implemented on fluidic-driven soft continuum robots. The efficacy and modelling accuracy of the proposed methodology are validated using both simulations and experiments.
Title: Stiffness modelling and analysis of soft fluidic-driven robots using Lie theory
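At its core, a configuration-dependent Cartesian compliance follows from congruence with the Jacobian, C_x = J C_q Jᵀ. The sketch below shows only this generic mapping; deriving J and the segment compliance C_q for soft continuum robots via Lie theory is the paper's contribution and is not reproduced here, so both matrices are placeholders:

```python
import numpy as np

def cartesian_compliance(J, C_q):
    """Map a joint/segment-space compliance matrix C_q to task space via
    the configuration-dependent Jacobian J: C_x = J C_q J^T. Because this
    is a congruence transform, C_x inherits symmetry and positive
    semidefiniteness from C_q, as a physical compliance must."""
    return J @ C_q @ J.T
```

Evaluating this at different configurations (different J) is what makes the resulting Cartesian stiffness analysis configuration-dependent.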
Pub Date : 2023-11-29 DOI: 10.1177/02783649231212929
Katsushi Ikeuchi, Naoki Wake, Kazuhiro Sasabuchi, Jun Takamatsu
The learning-from-observation (LfO) paradigm allows a robot to learn how to perform actions by observing human actions. Previous research in top-down learning-from-observation has mainly focused on the industrial domain, which consists only of the real physical constraints between a manipulated tool and the robot’s working environment. To extend this paradigm to the household domain, which consists of imaginary constraints derived from human common sense, we introduce the idea of semantic constraints, which are represented similarly to the physical constraints by defining an imaginary contact with an imaginary environment. By studying the transitions between contact states under physical and semantic constraints, we derive a necessary and sufficient set of task representations that provides the upper bound of the possible task set. We then apply the task representations to analyze various actions in top-rated household YouTube videos and real home cooking recordings, classify frequently occurring constraint patterns into physical, semantic, and multi-step task groups, and determine a subset that covers standard household actions. Finally, we design and implement task models, corresponding to these task representations in the subset, with the necessary daemon functions to collect the necessary parameters to perform the corresponding household actions. Our results provide promising directions for incorporating common sense into the robot teaching literature.
Title: Semantic constraints to represent common sense required in household actions for multimodal learning-from-observation robot
In endoscopic submucosal dissection (ESD), the gastrointestinal (GI) tract requires the surgical instruments to navigate through a long, narrow, and tortuous endoscope. This poses a great challenge in developing ESD instruments with small dimensions, flexibility, and high distal dexterity. In this work, we propose the first Transendoscopic Flexible Parallel Continuum Robotic mechanism to develop a miniature, dexterous, flexible-stiff-balanced Wrist (FPCW). Moreover, it can steer multifunctional instruments of diameters 2.5 mm to 3.5 mm, including the electrosurgical knife, injection needle, and forceps. Our FPCW instruments are adaptable to commercially available dual-channel endoscopes (diameter: <12 mm, channel width: 2.8 mm and around 3.8 mm). Furthermore, we develop a surgical telerobotic system, called DREAMS (Dual-arm Robotic Endoscopic Assistant for Minimally Invasive Surgery), using our smallest FPCW instruments for bimanual ESD procedures. First, we conduct a series of experiments to determine the FPCW’s design and kinematics parameters and to verify the mechanical properties of the FPCW instrument prototypes, including workspace, stiffness, strength, and teleoperation accuracy. Second, we validate the functionality of the FPCW instruments through ex vivo tests by performing ESD steps on porcine stomachs. Finally, we perform an in vivo test on a live porcine model and show that DREAMS can be teleoperated intuitively to perform bimanual ESD efficiently, with an average dissection speed of 108.95 mm²/min at the greater curvature of the gastric body, demonstrating that DREAMS has satisfactory maneuverability and accuracy and is more competitive than counterpart robotic systems.
Title: Transendoscopic flexible parallel continuum robotic mechanism for bimanual endoscopic submucosal dissection
Authors: Huxin Gao, Xiaoxiao Yang, X. Xiao, Xiaolong Zhu, Tao Zhang, Cheng Hou, Huicong Liu, Max Q.-H. Meng, Lining Sun, Xiuli Zuo, Yanqing Li, Hongliang Ren
Pub Date : 2023-11-18 DOI: 10.1177/02783649231209338
Pub Date : 2023-11-16DOI: 10.1177/02783649231215095
K. Nielsen, Gustaf Hendeby
A static world assumption is often used when considering the simultaneous localization and mapping (SLAM) problem. In reality, especially when long-term autonomy is the objective, this is not a valid assumption. This paper studies a scenario where landmarks can occupy multiple discrete positions at different points in time, where each possible position is added to a multi-hypothesis map representation. A selector-mixture distribution is introduced and used in the observation model. Each landmark position hypothesis is associated with one component in the mixture. Landmark movements are modeled by a discrete Markov chain, and the Monte Carlo tree search algorithm is used as the component selector. The non-static environment model is further incorporated into the factor graph formulation of the SLAM problem and is solved by iterating between estimating discrete variables with the component selector and optimizing continuous variables with an efficient state-of-the-art nonlinear least squares SLAM solver. The proposed non-static SLAM system is validated in numerical simulation and with a publicly available dataset by showing that a non-static environment can successfully be navigated.
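The abstract describes a solver that alternates between a discrete step (choosing which position hypothesis each landmark currently occupies) and a continuous step (nonlinear least-squares refinement of the pose with the chosen positions fixed). The following is a minimal 1-D, range-only sketch of that alternation; a greedy nearest-residual selector stands in for the paper's Monte Carlo tree search component selector, and all function names and numeric values are illustrative, not taken from the paper.

```python
def select_hypotheses(observations, hypotheses, pose):
    """Discrete step: for each landmark, pick the position hypothesis that
    best explains the current range observation given the pose estimate.
    (A greedy stand-in for the paper's MCTS component selector.)"""
    return [
        min(hyps, key=lambda h: abs((h - pose) - z))
        for z, hyps in zip(observations, hypotheses)
    ]


def refine_pose(observations, selected):
    """Continuous step: least-squares pose update with the selected landmark
    positions held fixed. For the 1-D range model z_i = l_i - x, the
    least-squares solution is x = mean(l_i - z_i)."""
    terms = [l - z for z, l in zip(observations, selected)]
    return sum(terms) / len(terms)


def alternating_slam(observations, hypotheses, pose0=0.0, iters=5):
    """Iterate discrete hypothesis selection and continuous refinement,
    mirroring the alternation described in the abstract."""
    pose = pose0
    selected = None
    for _ in range(iters):
        selected = select_hypotheses(observations, hypotheses, pose)
        pose = refine_pose(observations, selected)
    return pose, selected


# Illustrative scenario: two landmarks, each with two discrete position
# hypotheses; the robot at true pose 1.0 measures ranges 4.0 and 8.0.
pose, selected = alternating_slam([4.0, 8.0], [[2.0, 5.0], [9.0, 12.0]])
```

In the full system the continuous step is a factor-graph nonlinear least-squares solve over the whole trajectory and map, and the discrete step explores hypothesis sequences with tree search rather than greedy per-landmark selection; this sketch only shows why the alternation converges when each step reduces the residual.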
{"title":"Hypothesis selection with Monte Carlo tree search for feature-based simultaneous localization and mapping in non-static environments","authors":"K. Nielsen, Gustaf Hendeby","doi":"10.1177/02783649231215095","DOIUrl":"https://doi.org/10.1177/02783649231215095","url":null,"abstract":"A static world assumption is often used when considering the simultaneous localization and mapping (SLAM) problem. In reality, especially when long-term autonomy is the objective, this is not a valid assumption. This paper studies a scenario where landmarks can occupy multiple discrete positions at different points in time, where each possible position is added to a multi-hypothesis map representation. A selector-mixture distribution is introduced and used in the observation model. Each landmark position hypothesis is associated with one component in the mixture. Landmark movements are modeled by a discrete Markov chain, and the Monte Carlo tree search algorithm is used as the component selector. The non-static environment model is further incorporated into the factor graph formulation of the SLAM problem and is solved by iterating between estimating discrete variables with the component selector and optimizing continuous variables with an efficient state-of-the-art nonlinear least squares SLAM solver. The proposed non-static SLAM system is validated in numerical simulation and with a publicly available dataset by showing that a non-static environment can successfully be navigated.","PeriodicalId":501362,"journal":{"name":"The International Journal of Robotics Research","volume":"5 2","pages":""},"PeriodicalIF":0.0,"publicationDate":"2023-11-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139269742","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date : 2023-09-01DOI: 10.1177/02783649231199044
M. A. Hsieh, Dylan A. Shell
{"title":"Selected papers from RSS2021","authors":"M. A. Hsieh, Dylan A. Shell","doi":"10.1177/02783649231199044","DOIUrl":"https://doi.org/10.1177/02783649231199044","url":null,"abstract":"","PeriodicalId":501362,"journal":{"name":"The International Journal of Robotics Research","volume":"45 1","pages":"703 - 704"},"PeriodicalIF":0.0,"publicationDate":"2023-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139343624","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}