Contact force quality is one of the most critical factors for safe and effective lesion formation during cardiac ablation. Contact force and contact stability play important roles in determining lesion size and in creating gap-free lesions. In this paper, the contact stability of a novel magnetic resonance imaging (MRI)-actuated robotic catheter under tissue surface motion is studied. The robotic catheter is modeled using a pseudo-rigid-body model, and a contact model under the surface constraint is provided. Two contact force control schemes to improve the contact stability of the catheter under heart surface motions are proposed, and their performance is evaluated in simulation.
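A pseudo-rigid-body model of the kind mentioned above approximates a continuum catheter as a chain of rigid links joined by torsional springs. The planar sketch below illustrates the idea only; the function names and the single spring constant `k` are illustrative assumptions, not the paper's actual formulation.

```python
import numpy as np

def prb_tip_position(link_lengths, joint_angles):
    """Forward kinematics of a planar pseudo-rigid-body chain.

    Each joint angle is measured relative to the previous link, so the
    absolute link orientation is the running sum of joint angles.
    """
    x = y = 0.0
    phi = 0.0  # absolute orientation of the current link
    for L, th in zip(link_lengths, joint_angles):
        phi += th
        x += L * np.cos(phi)
        y += L * np.sin(phi)
    return np.array([x, y])

def spring_torques(joint_angles, k):
    """Restoring torque of the torsional springs (linear spring model)."""
    return -k * np.asarray(joint_angles)

# Straight configuration: tip lies at the summed link length.
tip = prb_tip_position([10.0, 10.0, 10.0], [0.0, 0.0, 0.0])  # -> [30, 0]
```

In a full model, static equilibrium under an external contact force is found by balancing these spring torques against the joint torques induced by the force through the chain's Jacobian.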
Flexible medical instruments, such as Continuum Dexterous Manipulators (CDMs), constitute an important class of tools for minimally invasive surgery. Accurate CDM shape reconstruction during surgery is of great importance, yet a challenging task. Fiber Bragg grating (FBG) sensors have demonstrated great potential for shape sensing, and consequently tip position estimation, of CDMs. However, due to their limited number of sensing locations, these sensors can only accurately recover basic shapes, and become unreliable in the presence of obstacles or multiple inflection points, such as s-bends. Optical Frequency Domain Reflectometry (OFDR), on the other hand, achieves much higher spatial resolution and can therefore accurately reconstruct more complex shapes. Additionally, Random Optical Gratings by Ultraviolet laser Exposure (ROGUEs) can be written into the fibers to increase the signal-to-noise ratio of the sensors. In this comparison study, tip position error is used as the metric to compare FBG and OFDR shape reconstructions for a 35 mm long CDM developed for orthopedic surgeries, using a pair of stereo cameras as ground truth. Three sets of experiments were conducted to measure the accuracy of each technique in various surgical scenarios. The tip position error for the OFDR (and FBG) technique was found to be 0.32 (0.83) mm in a free-bending environment, 0.41 (0.80) mm when interacting with obstacles, and 0.45 (2.27) mm in s-bending. Moreover, the maximum tip position error remains sub-millimeter for the OFDR reconstruction, while it reaches 3.40 mm for the FBG reconstruction. These results suggest that OFDR offers a cost-effective, robust, and more accurate alternative to FBG sensors for reconstructing complex CDM shapes.
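Both sensing modalities ultimately yield curvature samples along the fiber, from which shape (and hence tip position) is obtained by integrating curvature over arc length; OFDR's advantage in the study above comes from its far denser sampling. The planar sketch below shows this integration step and the tip-error metric; the discretization and function names are illustrative, not the paper's pipeline.

```python
import numpy as np

def reconstruct_shape(s, kappa):
    """Integrate planar curvature kappa(s) over arc length s to get (x, y).

    The bending angle theta(s) is the integral of curvature (trapezoidal
    rule); positions then follow by forward-Euler integration of the
    unit tangent (cos theta, sin theta).
    """
    theta = np.concatenate(
        ([0.0], np.cumsum(0.5 * (kappa[1:] + kappa[:-1]) * np.diff(s)))
    )
    ds = np.diff(s)
    x = np.concatenate(([0.0], np.cumsum(np.cos(theta[:-1]) * ds)))
    y = np.concatenate(([0.0], np.cumsum(np.sin(theta[:-1]) * ds)))
    return x, y

def tip_error(recon_tip, gt_tip):
    """Euclidean distance between reconstructed and ground-truth tip."""
    return np.linalg.norm(np.asarray(recon_tip) - np.asarray(gt_tip))
```

With only a handful of FBG sensing locations, `kappa` must be interpolated between sparse samples, which is why complex profiles such as s-bends degrade the FBG reconstruction while densely sampled OFDR data does not.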
A fundamental challenge in retinal surgery is safely navigating a surgical tool to a desired goal position on the retinal surface while avoiding damage to surrounding tissues, a procedure that typically requires tens-of-microns accuracy. In practice, the surgeon relies on depth-estimation skills to localize the tool-tip with respect to the retina in order to perform the tool-navigation task, which can be prone to human error. To alleviate such uncertainty, prior work has introduced ways to assist the surgeon by estimating the tool-tip distance to the retina and providing haptic or auditory feedback. However, automating the tool-navigation task itself remains unsolved and largely unexplored. Such a capability, if reliably automated, could serve as a building block to streamline complex procedures and reduce the chance of tissue damage. Towards this end, we propose to automate the tool-navigation task by learning to mimic expert demonstrations of the task. Specifically, a deep network is trained on recorded visual-servoing demonstrations to imitate expert trajectories toward various locations on the retina, navigating to a goal specified by the user. The proposed autonomous navigation system is evaluated in simulation and in physical experiments using a silicone eye phantom. We show that the network reliably navigates a needle surgical tool to various desired locations with an average accuracy of 137 μm in physical experiments and 94 μm in simulation, and generalizes well to unseen situations such as the presence of auxiliary surgical tools, variable eye backgrounds, and changing brightness conditions.
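Learning from expert demonstrations as described above is, at its core, behavioral cloning: regress the expert's action from the observed state. The toy numpy sketch below fits a linear policy to synthetic goal-conditioned demonstrations; the actual system uses a deep network on images, and the data, policy form, and names here are illustrative assumptions only.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for expert demonstrations (toy data, not the
# paper's recordings): state = (tip_xy, goal_xy), and the "expert"
# action steps 10% of the way toward the goal.
states = rng.uniform(-1.0, 1.0, size=(500, 4))
actions = 0.1 * (states[:, 2:] - states[:, :2])

# Behavioral cloning: fit the policy by least-squares regression of
# expert actions on states.
W, *_ = np.linalg.lstsq(states, actions, rcond=None)

def policy(state):
    """Cloned policy: predicted action for a (tip, goal) state."""
    return state @ W

# Rollout: repeatedly apply the learned policy; the tip converges
# toward the commanded goal.
tip, goal = np.array([0.8, -0.5]), np.array([-0.3, 0.4])
for _ in range(100):
    tip = tip + policy(np.concatenate([tip, goal]))
```

The same structure scales up to the paper's setting by replacing the linear map with a convolutional network over microscope images and the synthetic pairs with recorded expert trajectories.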