Will your next surgeon be a robot? Autonomy and AI in robotic surgery
Samuel Schmidgall, Justin D. Opfermann, Ji Woong Kim, Axel Krieger
Pub Date: 2025-07-23 | DOI: 10.1126/scirobotics.adt0187
State-of-the-art surgery is performed robotically under direct surgeon control. However, surgical outcome is limited by the availability, skill, and day-to-day performance of the operating surgeon. What will it take to improve surgical outcomes independent of human limitations? In this Review, we explore the technological evolution of robotic surgery and current trends in robotics and artificial intelligence that could lead to a future generation of autonomous surgical robots that will outperform today’s teleoperated robots.
Surgical embodied intelligence for generalized task autonomy in laparoscopic robot-assisted surgery
Yonghao Long, Anran Lin, Derek Hang Chun Kwok, Lin Zhang, Zhenya Yang, Kejian Shi, Lei Song, Jiawei Fu, Hongbin Lin, Wang Wei, Kai Chen, Xiangyu Chu, Yang Hu, Hon Chi Yip, Philip Wai Yan Chiu, Peter Kazanzides, Russell H. Taylor, Yunhui Liu, Zihan Chen, Zerui Wang, Samuel Kwok Wai Au, Qi Dou
Pub Date: 2025-07-16 | DOI: 10.1126/scirobotics.adt3093
Surgical robots capable of autonomously performing various tasks could enhance efficiency and augment human productivity in addressing clinical needs. Although current solutions have automated specific actions within defined contexts, they are difficult to generalize across the diverse environments of general surgery. Embodied intelligence enables general-purpose robot learning with applications for daily tasks, yet its application in the medical domain remains limited. We introduced an open-source surgical embodied intelligence simulator that provides an interactive environment for developing reinforcement learning methods for minimally invasive surgical robots. Using such embodied artificial intelligence, this study further addresses surgical task automation, enabling zero-shot transfer of simulation-trained policies to real-world scenarios. The proposed method encompasses visual parsing, a perceptual regressor, policy learning, and a visual servoing controller, forming a paradigm that combines the advantages of data-driven policies and classic controllers. The visual parsing uses stereo depth estimation and image segmentation with a visual foundation model to handle complex scenes. Experiments demonstrated autonomy in seven game-based skill training tasks on the da Vinci Research Kit, with a proof-of-concept study on haptic-assisted skill training as a practical application. Moreover, we conducted automation of five surgical assistive tasks with the Sentire surgical system on ex vivo animal tissues with various scenes, object sizes, instrument types, and illuminations. The learned policies were also validated in a live-animal trial for three tasks in dynamic in vivo surgical environments. We hope this open-source infrastructure, coupled with a general-purpose learning paradigm, will inspire and facilitate future research on embodied intelligence toward autonomous surgical robots.
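The paradigm described above pairs a data-driven policy with a classic visual-servoing controller. A minimal sketch of that combination is below; the stand-in policy, the proportional controller, the blending weights, and the convergence loop are all illustrative assumptions, not the authors' implementation.

```python
# Hedged sketch: a learned policy proposes a coarse motion toward a target,
# and a classic proportional visual-servoing term refines it with measured
# position-error feedback. All gains and functions here are assumptions.

def learned_policy(tip, target):
    """Stand-in for a simulation-trained policy: a coarse step toward the target."""
    return [0.5 * (t - p) for p, t in zip(tip, target)]

def visual_servo_step(tip, target, gain=0.5):
    """Classic proportional visual servoing on the observed position error."""
    return [gain * (t - p) for p, t in zip(tip, target)]

def run_episode(tip, target, steps=50, tol=1e-3):
    """Blend the policy proposal with the servo correction at each control tick."""
    for _ in range(steps):
        policy = learned_policy(tip, target)
        servo = visual_servo_step(tip, target)
        tip = [p + 0.5 * a + 0.5 * b for p, a, b in zip(tip, policy, servo)]
        err = sum((t - p) ** 2 for p, t in zip(tip, target)) ** 0.5
        if err < tol:
            return tip, True
    return tip, False

# Instrument tip converges to the hypothetical target within the step budget.
tip, reached = run_episode([0.0, 0.0, 0.0], [0.02, 0.01, 0.03])
print(reached)
```

In this toy setup the blended update halves the position error each tick, so the loop terminates well inside the 50-step budget; the real system replaces both terms with learned and calibrated components.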
Equalizing access: How robotics and AI can transform surgical care worldwide
Marta Weber, Kee B. Park, Salim Afshar
Pub Date: 2025-07-16 | DOI: 10.1126/scirobotics.adt6471
The integration of robotics and artificial intelligence holds promise for improving access to surgical care worldwide.
The robot will see you now: Foundation models are the path forward for autonomous robotic surgery
Michael Yip
Pub Date: 2025-07-09 | DOI: 10.1126/scirobotics.adt0684
Foundation models in robotics are here to stay, but can surgical robotics keep up with their data-intensive requirements?
SRT-H: A hierarchical framework for autonomous surgery via language-conditioned imitation learning
Ji Woong (Brian) Kim, Juo-Tung Chen, Pascal Hansen, Lucy Xiaoyang Shi, Antony Goldenberg, Samuel Schmidgall, Paul Maria Scheikl, Anton Deguet, Brandon M. White, De Ru Tsai, Richard Jaepyeong Cha, Jeffrey Jopling, Chelsea Finn, Axel Krieger
Pub Date: 2025-07-09 | DOI: 10.1126/scirobotics.adt5254
Research on autonomous surgery has largely focused on simple task automation in controlled environments. However, real-world surgical applications demand dexterous manipulation over extended durations and robust generalization to the inherent variability of human tissue. These challenges remain difficult to address using existing logic-based or conventional end-to-end learning strategies. To address this gap, we propose a hierarchical framework for performing dexterous, long-horizon surgical steps. Our approach uses a high-level policy for task planning and a low-level policy for generating motion trajectories. The high-level planner plans in language space, generating task-level or corrective instructions that guide the robot through the long-horizon steps and help recover from errors made by the low-level policy. We validated our framework through ex vivo experiments on cholecystectomy, a commonly practiced minimally invasive procedure, and conducted ablation studies to evaluate key components of the system. Our method achieves a 100% success rate across eight different ex vivo gallbladders, operating fully autonomously without human intervention. The hierarchical approach improved the policy’s ability to recover from suboptimal states that are inevitable in the highly dynamic environment of realistic surgical applications. This work demonstrates step-level autonomy in a surgical procedure, marking a milestone toward clinical deployment of autonomous surgical systems.
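The hierarchical structure in SRT-H can be sketched as a loop in which a high-level planner emits language instructions, including corrective ones, and a low-level policy executes them. The task names, the one-shot simulated failure, and both "policies" below are simplified assumptions for illustration only, not the authors' trained models.

```python
# Hedged sketch of hierarchical, language-conditioned control: the planner
# plans in language space and issues a corrective "retry" instruction when
# the low-level policy leaves the robot in a bad state.

TASK_STEPS = ["clip cystic duct", "cut cystic duct", "dissect gallbladder"]

def high_level_planner(state):
    """Issue a corrective instruction if needed, otherwise the next task step."""
    if state["needs_correction"]:
        return f"retry: {TASK_STEPS[state['step']]}"
    if state["step"] < len(TASK_STEPS):
        return TASK_STEPS[state["step"]]
    return "done"

def low_level_policy(instruction, state):
    """Stand-in trajectory policy: its first attempt at step 1 'fails' once,
    exercising the planner's error-recovery path."""
    if instruction.startswith("retry:"):
        state["needs_correction"] = False
        state["step"] += 1
    elif state["step"] == 1 and not state["retried_once"]:
        state["needs_correction"] = True
        state["retried_once"] = True
    else:
        state["step"] += 1
    return state

state = {"step": 0, "needs_correction": False, "retried_once": False}
log = []
while True:
    instruction = high_level_planner(state)
    if instruction == "done":
        break
    log.append(instruction)
    state = low_level_policy(instruction, state)
print(log)
```

The executed instruction log shows the corrective replan inserted between the failed and successful attempts, which is the recovery behavior the abstract credits for robustness in dynamic surgical scenes.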
Medical needles in the hands of AI: Advancing toward autonomous robotic navigation
Ron Alterovitz, Janine Hoelscher, Alan Kuntz
Pub Date: 2025-07-09 | DOI: 10.1126/scirobotics.adt1874
Safely and accurately navigating needles percutaneously or endoscopically to sites deep within the body is essential for many medical procedures, from biopsies to localized drug deliveries to tumor ablations. The advent of image guidance decades ago gave physicians information about the patient’s anatomy. We are now entering the era of AI (artificial intelligence) guidance, where AI can automatically analyze images, identify targets and obstacles, compute safe trajectories, and autonomously navigate a needle to a site with unprecedented accuracy and precision. We survey recent advances in the building blocks of AI guidance for medical needle deployment robots (perceiving anatomy, planning motions, perceiving instrument state, and performing motions) and discuss research opportunities to maximize the benefits of AI guidance for patient care.
Forces for free: Vision-based contact force estimation with a compliant hand
Yifan Zhu, Mei Hao, Xupeng Zhu, Quentin Bateux, Alex Wong, Aaron M. Dollar
Pub Date: 2025-06-25 | DOI: 10.1126/scirobotics.adq5046
Force-sensing capabilities are essential for robot manipulation systems. However, commonly used wrist-mounted force/torque sensors are heavy, fragile, and expensive, and tactile sensors require adding fragile circuitry to the robot fingers while only providing force information local to the contact. Here, we present a vision-based contact force estimator that serves as a more cost-effective and easier-to-implement alternative to existing force sensors by leveraging the deformation of a compliant hand upon contact. Our approach uses an estimator that visually observes a specialized compliant robot hand (available open source with easy fabrication through 3D printing) and predicts the contact force on the basis of its elastic deformation under external forces. Because using wrist-mounted cameras to observe the gripper is common for robot manipulation systems, our method can obtain additional force information provided that the gripper is compliant. We optimized our compliant hand to minimize friction and avoid singularities in finger configurations, and we introduced memory to the estimator to combat the partial observability of the contact forces from the remaining friction and hysteresis. In addition, the estimator was made robust to background distractions and finger occlusions using vision foundation models to segment out the fingers. Although it is less accurate and slower than commercial force/torque sensors, we experimentally demonstrated the accuracy and robustness of our estimator (achieving between 0.2 newton and 0.4 newton error) and its utility during a variety of manipulation tasks using the gripper in the presence of noisy backgrounds and occlusions.
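The core idea above, inferring contact force from the visually observed elastic deflection of a compliant finger, can be sketched with a linear spring model plus a short memory window to damp friction- and hysteresis-induced noise. The stiffness constant, the window size, and the simple averaging are simplifying assumptions, not the paper's learned estimator.

```python
from collections import deque

STIFFNESS_N_PER_MM = 0.8  # assumed finger stiffness (illustrative value)

class DeflectionForceEstimator:
    """Toy force estimator: force ≈ stiffness × smoothed visual deflection."""

    def __init__(self, window=3):
        # Memory over recent deflections, standing in for the estimator's
        # learned temporal state that combats partial observability.
        self.history = deque(maxlen=window)

    def update(self, deflection_mm):
        self.history.append(deflection_mm)
        smoothed = sum(self.history) / len(self.history)
        return STIFFNESS_N_PER_MM * smoothed

estimator = DeflectionForceEstimator()
readings = [0.0, 1.0, 2.0, 2.0, 2.0]  # deflections seen by a wrist camera
forces = [estimator.update(d) for d in readings]
print(forces[-1])
```

Once the deflection readings settle at 2.0 mm, the windowed average equals the raw reading and the estimate converges to the steady-state spring force; the paper's estimator replaces the spring model with a network trained on the hand's actual elastic response.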
OpenExo: An open-source modular exoskeleton to augment human function
Jack R. Williams, Chance F. Cuddeback, Shanpu Fang, Daniel Colley, Noah Enlow, Payton Cox, Paul Pridham, Zachary F. Lerner
Pub Date: 2025-06-25 | DOI: 10.1126/scirobotics.adt1591
Although the field of wearable robotic exoskeletons is rapidly expanding, there are several barriers to entry that discourage many from pursuing research in this area, ultimately hindering growth. Chief among these is the lengthy and costly development process to get an exoskeleton from conception to implementation and the necessity for a broad set of expertise. In addition, many exoskeletons are designed for a specific utility and are confined to the laboratory environment, limiting the flexibility of the designed system to adapt to answer new questions and explore new domains. To address these barriers, we present OpenExo, an open-source modular untethered exoskeleton framework that provides access to all aspects of the design process, including software, electronics, hardware, and control schemes. To demonstrate the utility of this exoskeleton framework, we performed benchtop and experimental validation testing with the system across multiple configurations, including hip-only incline assistance, ankle-only indoor and outdoor assistance, hip-and-ankle load carriage assistance, and elbow-only weightlifting assistance. All aspects of the software architecture, electrical components, hip and Bowden-cable transmission designs, and control schemes are freely available for other researchers to access, use, and modify when looking to address research questions in the field of wearable exoskeletons. Our hope is that OpenExo will accelerate the development and testing of new exoskeleton designs and control schemes while simultaneously encouraging others, including those who would have been turned away from entering the field, to explore new and unique research questions.
The greatest challenge for prosthetics may be social, not neural, connections
Robin R. Murphy
Pub Date: 2025-06-25 | DOI: 10.1126/scirobotics.adz2721
Death of the Author: A Novel imagines the influence of an experimental exoskeleton on a disabled author and her family.
Photocatalytic microrobots for treating bacterial infections deep within sinuses
Haidong Yu, Xurui Liu, Yabin Zhang, Jie Shen, Xijun Liu, Shubo Liu, Xiangyu Wang, Bonan Sun, Huihui Du, Lin Xu, Bingsuo Zou, Jianning Ding, Qingsong Xu, Li Zhang, Ben Wang
Pub Date: 2025-06-25 | DOI: 10.1126/scirobotics.adt0720
Microrobotic techniques are promising for treating biofilm infections located deep within the human body. However, the presence of highly viscous pus presents a formidable biological barrier, severely restricting targeted and minimally invasive treatments. In addition, conventional antibacterial agents exhibit limited payload integration with microrobotic systems, further compromising therapeutic efficiency. In this study, we propose a photocatalytic microrobot through a magnetically guided, optical fiber–assisted therapeutic platform specifically designed to treat bacterial infections in deep mucosal cavities. The microrobots comprising copper (Cu) single atom–doped bismuth oxoiodide (BiOI), termed CBMRs, can be guided and tracked by real-time x-ray imaging. Under external magnetic actuation, the illuminated region from the magnetically guided optical fiber synchronously follows the CBMR swarm, enabling effective antibacterial action at targeted infection sites. Upon continuous visible-light irradiation, the resultant photothermal effect substantially reduces the viscosity of pus on inflamed mucosal tissues, enhancing the penetration capability of the CBMR swarm by more than threefold compared with baseline conditions. Concurrently, atomic-level design of CBMRs facilitates robust generation of reactive oxygen species, enabling efficient biofilm disruption and reductions in bacterial viability. We validated the effectiveness of this integrated optical fiber–assisted microrobotic platform in a rabbit sinusitis model in vivo, demonstrating its potential for clinically relevant infection therapy.