Retinal vein cannulation is a promising approach for treating retinal vein occlusion that involves injecting medicine into the occluded vessel to dissolve the clot. The approach remains largely unexploited clinically due to surgeons' limited ability to detect interaction forces between surgical tools and retinal tissue. In this paper, a dual force constraint controller for robot-assisted retinal surgery is presented that keeps the tool-to-vessel and tool-to-sclera forces below prescribed thresholds. A cannulation tool and forceps with dual force-sensing capability were developed and used to measure the force information fed into the robot controller, which was implemented on existing Steady Hand Eye Robot platforms. The robotic system facilitates retinal vein cannulation by allowing a user to grasp the target vessel with the forceps and then enter the vessel with the cannula. The system was evaluated on an eye phantom. The results show that, while the eyeball was subjected to rotational disturbances, the proposed controller actuates the robotic manipulators to maintain average tool-to-vessel forces of 10.9 mN and 13.1 mN and average tool-to-sclera forces of 38.1 mN and 41.2 mN for the cannula and the forceps, respectively. Such small tool-to-tissue forces are low enough to avoid retinal tissue injury. Additionally, two clinicians participated in a preliminary user study of the bimanual cannulation, which demonstrated that operation time and tool-to-tissue forces decrease significantly when using the bimanual robotic system compared with freehand performance.
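The abstract does not specify the form of the dual force constraint controller. As a rough, hypothetical illustration of how such a constraint might be enforced, the sketch below attenuates the commanded tool velocity as either measured force approaches its prescribed threshold; the thresholds, names, and proportional scaling are assumptions for illustration only, not the paper's control law.

```python
import numpy as np

# Hypothetical dual force constraint: the commanded tool velocity is
# attenuated as either measured force approaches its threshold.
# Threshold values are placeholders, not the paper's values.
F_VESSEL_MAX = 0.020   # tool-to-vessel force limit [N] (assumed)
F_SCLERA_MAX = 0.060   # tool-to-sclera force limit [N] (assumed)

def constrain_velocity(v_cmd, f_vessel, f_sclera):
    """Scale the commanded Cartesian velocity so that tool-to-tissue
    forces stay below their prescribed limits (simple linear damping)."""
    # Fraction of each force budget currently in use (0 = no contact).
    usage = max(f_vessel / F_VESSEL_MAX, f_sclera / F_SCLERA_MAX)
    # Linearly reduce velocity; stop completely at or above a limit.
    scale = max(0.0, 1.0 - usage)
    return scale * np.asarray(v_cmd)

# Example: 30 mN scleral force uses half the sclera budget -> half speed.
print(constrain_velocity([1e-3, 0.0, 0.0], f_vessel=0.005, f_sclera=0.030))
```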
This paper reports the development of a fully actuated body-mounted robotic assistant for MRI-guided low back pain injection. The robot is designed with a 4-DOF needle alignment module and a 2-DOF remotely actuated needle driver module. The 6-DOF fully actuated robot can operate inside the scanner bore during imaging, thereby minimizing the need to move the patient in or out of the scanner during the procedure and thus potentially reducing the procedure time and streamlining the workflow. The robot has a lightweight and compact structure that can be attached directly to the patient's lower back using straps, attenuating the effect of patient motion by moving with the patient. The novel remote actuation design of the needle driver module, with a beaded-chain transmission, reduces the weight and profile on the patient and minimizes the image degradation caused by the actuation electronics. The free-space positioning accuracy of the system was evaluated with an optical tracking system, demonstrating a mean absolute error (MAE) of 0.99±0.46 mm in tip position and 0.99±0.65° in orientation. A qualitative image quality evaluation was performed on a human volunteer, revealing minimal visible image degradation that should not affect the procedure. The mounting stability of the system was assessed on a human volunteer, indicating that the 3D variation of the target position with respect to the robot frame was less than 0.7 mm.
We present a parallel robot mechanism and the constitutive laws that govern the deformation of its constituent soft actuators. Our ultimate goal is the real-time correction of a patient's head deviation from a target pose, with the soft actuators controlling the position of the patient's cranial region on a treatment machine. We describe the mechanism, derive the stress-strain constitutive laws for the individual actuators and the inverse kinematics that prescribes a given deformation, and then present simulation results that validate our mathematical formulation. Our results demonstrate deformations consistent with our radially symmetric displacement formulation under a finite elastic deformation framework.
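The abstract does not reproduce the constitutive law itself. As one common example of the kind of finite-deformation relation such an analysis might build on, an incompressible neo-Hookean model relates the Cauchy stress to the deformation gradient as shown below; this specific model is an assumption for illustration and not necessarily the law derived in the paper.

```latex
% Illustrative incompressible neo-Hookean constitutive law (assumed form):
% Cauchy stress in terms of the deformation gradient F and the left
% Cauchy-Green tensor B = F F^T, with shear modulus \mu and pressure p.
\[
  \boldsymbol{\sigma} \;=\; -p\,\mathbf{I} \;+\; \mu\,\mathbf{B},
  \qquad \mathbf{B} = \mathbf{F}\mathbf{F}^{\mathsf{T}},
  \qquad \det\mathbf{F} = 1 .
\]
```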
Contact force quality is one of the most critical factors for safe and effective lesion formation during cardiac ablation. The contact force and contact stability play important roles in determining the lesion size and creating a gap-free lesion. In this paper, the contact stability of a novel magnetic resonance imaging (MRI)-actuated robotic catheter under tissue surface motion is studied. The robotic catheter is modeled using a pseudo-rigid-body model, and a contact model under the surface constraint is provided. Two contact force control schemes to improve the contact stability of the catheter under heart surface motion are proposed, and their performance is evaluated in simulation.
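The two control schemes themselves are not detailed in the abstract. As a generic illustration of contact force regulation against a moving surface, the sketch below shows a simple proportional-integral force controller that adjusts a commanded tip displacement to track a desired contact force; the gains, signal names, and PI structure are assumptions rather than the schemes proposed in the paper.

```python
# Hypothetical PI contact-force regulator for a catheter tip pressing on a
# moving tissue surface. Gains and control structure are illustrative
# assumptions, not the control schemes proposed in the paper.
class ContactForceController:
    def __init__(self, f_desired, kp=2e-3, ki=5e-4, dt=0.01):
        self.f_desired = f_desired      # desired contact force [N]
        self.kp, self.ki, self.dt = kp, ki, dt
        self.integral = 0.0

    def update(self, f_measured):
        """Return an incremental tip displacement command [m] that drives
        the measured contact force toward the desired value."""
        error = self.f_desired - f_measured
        self.integral += error * self.dt
        return self.kp * error + self.ki * self.integral

# Example: regulate toward 0.15 N while surface motion perturbs the force.
ctrl = ContactForceController(f_desired=0.15)
delta = ctrl.update(f_measured=0.10)    # positive -> advance tip slightly
```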
Flexible medical instruments, such as Continuum Dexterous Manipulators (CDMs), constitute an important class of tools for minimally invasive surgery. Accurate CDM shape reconstruction during surgery is of great importance, yet a challenging task. Fiber Bragg grating (FBG) sensors have demonstrated great potential for shape sensing and, consequently, tip position estimation of CDMs. However, due to the limited number of sensing locations, these sensors can only accurately recover basic shapes and become unreliable in the presence of obstacles or many inflection points, such as s-bends. Optical Frequency Domain Reflectometry (OFDR), on the other hand, can achieve much higher spatial resolution and can therefore accurately reconstruct more complex shapes. Additionally, Random Optical Gratings by Ultraviolet laser Exposure (ROGUEs) can be written into the fibers to increase the signal-to-noise ratio of the sensors. In this comparison study, the tip position error is used as a metric to compare FBG and OFDR shape reconstructions for a 35 mm long CDM developed for orthopedic surgeries, using a pair of stereo cameras as ground truth. Three sets of experiments were conducted to measure the accuracy of each technique in various surgical scenarios. The tip position error for the OFDR (and FBG) technique was found to be 0.32 (0.83) mm in a free-bending environment, 0.41 (0.80) mm when interacting with obstacles, and 0.45 (2.27) mm in s-bending. Moreover, the maximum tip position error remains sub-millimeter for the OFDR reconstruction, while it reaches 3.40 mm for the FBG reconstruction. These results suggest that OFDR is a cost-effective, robust, and more accurate alternative to FBG sensors for reconstructing complex CDM shapes.
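The comparison metric is the tip position error against stereo-camera ground truth. A minimal sketch of how such an error might be computed from reconstructed and ground-truth tip positions is shown below; the array names and the simple Euclidean formulation are assumptions for illustration.

```python
import numpy as np

def tip_position_errors(tips_reconstructed, tips_ground_truth):
    """Euclidean tip position error per trial, given Nx3 arrays of
    reconstructed (FBG or OFDR) and stereo-camera ground-truth positions."""
    diff = np.asarray(tips_reconstructed) - np.asarray(tips_ground_truth)
    errors = np.linalg.norm(diff, axis=1)
    return errors.mean(), errors.max()

# Example with made-up measurements in millimetres.
recon = [[0.1, 0.2, 34.9], [0.0, -0.3, 35.2]]
truth = [[0.0, 0.0, 35.0], [0.1, 0.0, 35.0]]
mean_err, max_err = tip_position_errors(recon, truth)
```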
A fundamental challenge in retinal surgery is safely navigating a surgical tool to a desired goal position on the retinal surface while avoiding damage to surrounding tissues, a task that typically requires tens-of-microns accuracy. In practice, the surgeon relies on depth-estimation skills to localize the tool tip with respect to the retina in order to perform the tool-navigation task, which is prone to human error. To alleviate such uncertainty, prior work has introduced ways to assist the surgeon by estimating the tool-tip distance to the retina and providing haptic or auditory feedback. However, automating the tool-navigation task itself remains unsolved and largely unexplored. Such a capability, if reliably automated, could serve as a building block for streamlining complex procedures and reducing the chance of tissue damage. Toward this end, we propose to automate the tool-navigation task by learning to mimic expert demonstrations of the task. Specifically, a deep network is trained to imitate expert trajectories toward various locations on the retina, based on recorded demonstrations of visual servoing to a given goal specified by the user. The proposed autonomous navigation system is evaluated in simulation and in physical experiments using a silicone eye phantom. We show that the network can reliably navigate a needle surgical tool to various desired locations to within 137 μm in physical experiments and 94 μm in simulation on average, and generalizes well to unseen situations, such as the presence of auxiliary surgical tools, variable eye backgrounds, and different brightness conditions.
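The abstract does not specify the network architecture or training procedure. As a generic illustration of learning from expert demonstrations (behavior cloning), the sketch below regresses a tool-tip motion command from an image and a user-specified goal location; the architecture, loss, tensor shapes, and placeholder data are assumptions for illustration, not the system described in the paper.

```python
import torch
import torch.nn as nn

# Minimal behavior-cloning sketch (illustrative only): predict a tool-tip
# motion command from a retinal image and a user-specified goal location.
class NavigationPolicy(nn.Module):
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(            # tiny CNN image encoder
            nn.Conv2d(3, 16, 5, stride=2), nn.ReLU(),
            nn.Conv2d(16, 32, 5, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten())
        self.head = nn.Sequential(               # fuse image features + goal
            nn.Linear(32 + 2, 64), nn.ReLU(),
            nn.Linear(64, 3))                    # 3-DOF motion command

    def forward(self, image, goal_xy):
        return self.head(torch.cat([self.encoder(image), goal_xy], dim=1))

policy = NavigationPolicy()
optimizer = torch.optim.Adam(policy.parameters(), lr=1e-4)
loss_fn = nn.MSELoss()

# One imitation step on a batch of expert demonstrations (placeholder data).
images = torch.randn(8, 3, 128, 128)    # microscope frames
goals = torch.rand(8, 2)                # goal positions on the retina
expert_cmds = torch.randn(8, 3)         # expert tool-tip motions
loss = loss_fn(policy(images, goals), expert_cmds)
optimizer.zero_grad(); loss.backward(); optimizer.step()
```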