
Latest publications in Science Robotics

Will your next surgeon be a robot? Autonomy and AI in robotic surgery
IF 25, CAS Tier 1 (Computer Science), Q1 ROBOTICS. Pub Date: 2025-07-23. DOI: 10.1126/scirobotics.adt0187
Samuel Schmidgall, Justin D. Opfermann, Ji Woong Kim, Axel Krieger
State-of-the-art surgery is performed robotically under direct surgeon control. However, surgical outcome is limited by the availability, skill, and day-to-day performance of the operating surgeon. What will it take to improve surgical outcomes independent of human limitations? In this Review, we explore the technological evolution of robotic surgery and current trends in robotics and artificial intelligence that could lead to a future generation of autonomous surgical robots that will outperform today’s teleoperated robots.
Citations: 0
Surgical embodied intelligence for generalized task autonomy in laparoscopic robot-assisted surgery.
IF 25, CAS Tier 1 (Computer Science), Q1 ROBOTICS. Pub Date: 2025-07-16. DOI: 10.1126/scirobotics.adt3093
Yonghao Long,Anran Lin,Derek Hang Chun Kwok,Lin Zhang,Zhenya Yang,Kejian Shi,Lei Song,Jiawei Fu,Hongbin Lin,Wang Wei,Kai Chen,Xiangyu Chu,Yang Hu,Hon Chi Yip,Philip Wai Yan Chiu,Peter Kazanzides,Russell H Taylor,Yunhui Liu,Zihan Chen,Zerui Wang, Samuel Kwok Wai Au,Qi Dou
Surgical robots capable of autonomously performing various tasks could enhance efficiency and augment human productivity in addressing clinical needs. Although current solutions have automated specific actions within defined contexts, they are challenging to generalize across diverse environments in general surgery. Embodied intelligence enables general-purpose robot learning with applications for daily tasks, yet its application in the medical domain remains limited. We introduced an open-source surgical embodied intelligence simulator for an interactive environment to develop reinforcement learning methods for minimally invasive surgical robots. Using such embodied artificial intelligence, this study further addresses surgical task automation, enabling zero-shot transfer of simulation-trained policies to real-world scenarios. The proposed method encompasses visual parsing, a perceptual regressor, policy learning, and a visual servoing controller, forming a paradigm that combines the advantages of data-driven policy and classic controller. The visual parsing uses stereo depth estimation and image segmentation with a visual foundation model to handle complex scenes. Experiments demonstrated autonomy in seven game-based skill training tasks on the da Vinci Research Kit, with a proof-of-concept study on haptic-assisted skill training as a practical application. Moreover, we conducted automation of five surgical assistive tasks with the Sentire surgical system on ex vivo animal tissues with various scenes, object sizes, instrument types, and illuminations. The learned policies were also validated in a live-animal trial for three tasks in dynamic in vivo surgical environments. We hope this open-source infrastructure, coupled with a general-purpose learning paradigm, will inspire and facilitate future research on embodied intelligence toward autonomous surgical robots.
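The pipeline this abstract outlines — visual parsing, a perceptual regressor, a learned policy, and a visual servoing controller — can be sketched as a closed perception-action loop. Everything below (the threshold-based parser, the proportional policy standing in for a learned one, the toy depth image and goal) is an illustrative assumption, not the authors' code:

```python
import numpy as np

def visual_parse(rgb, depth):
    """Stand-in for the paper's stereo depth estimation plus foundation-model
    segmentation: here, simply mask pixels closer than a fixed threshold."""
    return depth < 0.5

def perceptual_regressor(mask, depth):
    """Regress a compact state (a 3D centroid of the segmented region)."""
    ys, xs = np.nonzero(mask)
    return np.array([xs.mean(), ys.mean(), depth[mask].mean()])

def policy(state, goal):
    """A simulation-trained policy would map state to action; a proportional
    step toward the goal stands in for the learned mapping."""
    return 0.5 * (goal - state)

def servo_step(state, action):
    """The visual servoing controller applies the commanded action."""
    return state + action

# Closed loop: parse -> regress -> act -> servo, repeated until converged.
depth = np.full((8, 8), 1.0)
depth[2:5, 2:5] = 0.3                      # a nearby "object" in the scene
state = perceptual_regressor(visual_parse(None, depth), depth)
goal = np.array([5.0, 5.0, 0.4])           # desired state (illustrative)
for _ in range(20):
    state = servo_step(state, policy(state, goal))
```

The proportional loop halves the error each iteration, so after 20 steps the state has effectively reached the goal; a learned policy would replace `policy` while the surrounding structure stays the same.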
Citations: 0
Equalizing access: How robotics and AI can transform surgical care worldwide
IF 27.5, CAS Tier 1 (Computer Science), Q1 ROBOTICS. Pub Date: 2025-07-16. DOI: 10.1126/scirobotics.adt6471
Marta Weber, Kee B. Park, Salim Afshar
The integration of robotics and artificial intelligence holds promise for improving access to surgical care worldwide.
Citations: 0
The robot will see you now: Foundation models are the path forward for autonomous robotic surgery
IF 27.5, CAS Tier 1 (Computer Science), Q1 ROBOTICS. Pub Date: 2025-07-09. DOI: 10.1126/scirobotics.adt0684
Michael Yip
Foundation models in robotics are here to stay, but can surgical robotics keep up with their data-intense requirement?
Citations: 0
SRT-H: A hierarchical framework for autonomous surgery via language-conditioned imitation learning
IF 25, CAS Tier 1 (Computer Science), Q1 ROBOTICS. Pub Date: 2025-07-09. DOI: 10.1126/scirobotics.adt5254
Ji Woong (Brian) Kim, Juo-Tung Chen, Pascal Hansen, Lucy Xiaoyang Shi, Antony Goldenberg, Samuel Schmidgall, Paul Maria Scheikl, Anton Deguet, Brandon M. White, De Ru Tsai, Richard Jaepyeong Cha, Jeffrey Jopling, Chelsea Finn, Axel Krieger
Research on autonomous surgery has largely focused on simple task automation in controlled environments. However, real-world surgical applications demand dexterous manipulation over extended durations and robust generalization to the inherent variability of human tissue. These challenges remain difficult to address using existing logic-based or conventional end-to-end learning strategies. To address this gap, we propose a hierarchical framework for performing dexterous, long-horizon surgical steps. Our approach uses a high-level policy for task planning and a low-level policy for generating low-level trajectories. The high-level planner plans in language space, generating task-level or corrective instructions that guide the robot through the long-horizon steps and help recover from errors made by the low-level policy. We validated our framework through ex vivo experiments on cholecystectomy, a commonly practiced minimally invasive procedure, and conducted ablation studies to evaluate key components of the system. Our method achieves a 100% success rate across eight different ex vivo gallbladders, operating fully autonomously without human intervention. The hierarchical approach improved the policy’s ability to recover from suboptimal states that are inevitable in the highly dynamic environment of realistic surgical applications. This work demonstrates step-level autonomy in a surgical procedure, marking a milestone toward clinical deployment of autonomous surgical systems.
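The two-level structure described here — a high-level planner that plans in language space and issues corrective instructions, and a low-level policy that produces trajectories — can be sketched minimally. The task names, gains, and the scripted recovery rule below are hypothetical stand-ins for the learned components, not the SRT-H implementation:

```python
import numpy as np

STEPS = ["grab gallbladder", "clip duct", "cut duct"]

def high_level(step_idx, error):
    """Plan in language space: advance through task steps, or issue a
    corrective instruction when the low-level policy has drifted."""
    if error > 0.5:
        return step_idx, f"recover: retry {STEPS[step_idx]}"
    return step_idx, STEPS[step_idx]

def low_level(instruction, state, target):
    """Map (instruction, observation) to a motion step toward the target;
    a corrective instruction triggers a more conservative (smaller) step."""
    gain = 0.2 if instruction.startswith("recover") else 0.6
    return state + gain * (target - state)

targets = {s: np.array([i + 1.0, 0.0]) for i, s in enumerate(STEPS)}
state, log = np.zeros(2), []
for idx in range(len(STEPS)):
    target = targets[STEPS[idx]]
    while np.linalg.norm(target - state) > 0.05:
        _, instr = high_level(idx, np.linalg.norm(target - state))
        state = low_level(instr, state, target)
    log.append(STEPS[idx])
```

The point of the hierarchy is visible even in this toy: the language-level planner decides *what* to do (including recovery), while the low-level policy decides *how* to move.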
Citations: 0
Medical needles in the hands of AI: Advancing toward autonomous robotic navigation
IF 25, CAS Tier 1 (Computer Science), Q1 ROBOTICS. Pub Date: 2025-07-09. DOI: 10.1126/scirobotics.adt1874
Ron Alterovitz, Janine Hoelscher, Alan Kuntz
Safely and accurately navigating needles percutaneously or endoscopically to sites deep within the body is essential for many medical procedures, from biopsies to localized drug deliveries to tumor ablations. The advent of image guidance decades ago gave physicians information about the patient’s anatomy. We are now entering the era of AI (artificial intelligence) guidance, where AI can automatically analyze images, identify targets and obstacles, compute safe trajectories, and autonomously navigate a needle to a site with unprecedented accuracy and precision. We survey recent advances in the building blocks of AI guidance for medical needle deployment robots (perceiving anatomy, planning motions, perceiving instrument state, and performing motions) and discuss research opportunities to maximize the benefits of AI guidance for patient care.
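The guidance loop the review surveys — perceive anatomy (targets and obstacles), plan a safe trajectory, perceive the instrument state, execute a motion, and replan — can be sketched in 2D. The greedy rotate-until-clear planner and all geometry below are illustrative assumptions, not a method from the surveyed literature:

```python
import numpy as np

OBSTACLE, RADIUS = np.array([2.0, 0.0]), 0.8   # anatomy to avoid (assumed)

def plan_step(tip, target, step=0.2):
    """Greedy replanning: head straight for the target, rotating the step
    direction away until the candidate point respects the clearance radius."""
    direction = (target - tip) / np.linalg.norm(target - tip)
    for angle in (0.0, 0.4, -0.4, 0.8, -0.8, 1.2, 1.6, 2.0):
        c, s = np.cos(angle), np.sin(angle)
        candidate = tip + step * (np.array([[c, -s], [s, c]]) @ direction)
        if np.linalg.norm(candidate - OBSTACLE) >= RADIUS:
            return candidate
    return tip  # no safe local step; a real system would replan globally

tip, target = np.array([0.0, 0.0]), np.array([4.0, 0.0])
path = [tip]
for _ in range(200):
    tip = plan_step(tip, target)   # instrument state is exact here; a real
    path.append(tip)               # system would estimate it from imaging
    if np.linalg.norm(tip - target) < 0.1:
        break
```

Every accepted waypoint is checked against the clearance before execution, so the needle detours over the obstacle and still converges on the target — the same perceive/plan/act cycle the review describes, minus the hard parts (deformable tissue, imaging noise, steerable-needle kinematics).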
Citations: 0
Forces for free: Vision-based contact force estimation with a compliant hand
IF 26.1, CAS Tier 1 (Computer Science), Q1 ROBOTICS. Pub Date: 2025-06-25. DOI: 10.1126/scirobotics.adq5046
Yifan Zhu, Mei Hao, Xupeng Zhu, Quentin Bateux, Alex Wong, Aaron M. Dollar
Force-sensing capabilities are essential for robot manipulation systems. However, commonly used wrist-mounted force/torque sensors are heavy, fragile, and expensive, and tactile sensors require adding fragile circuitry to the robot fingers while only providing force information local to the contact. Here, we present a vision-based contact force estimator that serves as a more cost-effective and easier-to-implement alternative to existing force sensors by leveraging deformations of compliant hands upon contacts when compliant hands are in use. Our approach uses an estimator that visually observes a specialized compliant robot hand (available open source with easy fabrication through 3D printing) and predicts the contact force on the basis of its elastic deformation upon external forces. Because using wrist-mounted cameras to observe the gripper is common for robot manipulation systems, our method can obtain additional force information provided that the gripper is compliant. We optimized our compliant hand to minimize friction and avoid singularities in finger configurations, and we introduced memory to the estimator to combat the partial observability of the contact forces from the remaining friction and hysteresis. In addition, the estimator was made robust to background distractions and finger occlusions using vision foundation models to segment out the fingers. Although it is less accurate and slower than commercial force/torque sensors, we experimentally demonstrated the accuracy and robustness of our estimator (achieving between 0.2 newton and 0.4 newton error) and its utility during a variety of manipulation tasks using the gripper in the presence of noisy backgrounds and occlusions.
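The core idea — infer contact force from the visually observed deflection of a compliant finger, with memory to combat friction and hysteresis — can be sketched with a linear spring model in place of the learned estimator. The stiffness value, window length, and noise level are illustrative assumptions, not the paper's parameters:

```python
import numpy as np
from collections import deque

STIFFNESS = 50.0  # N per meter of fingertip deflection (assumed)

class ForceEstimator:
    """Linear-elastic stand-in for the learned vision-based estimator:
    force ~ stiffness * observed deflection, smoothed over a short history
    window that plays the role of the estimator's memory."""

    def __init__(self, window=5):
        self.history = deque(maxlen=window)

    def update(self, deflection_m):
        self.history.append(STIFFNESS * deflection_m)
        return float(np.mean(self.history))  # smoothed estimate, newtons

est = ForceEstimator()
rng = np.random.default_rng(0)
true_force = 2.0                              # newtons (ground truth)
for _ in range(50):
    # deflection "measured" from images, with visual tracking noise
    deflection = true_force / STIFFNESS + rng.normal(0, 0.002)
    f_hat = est.update(deflection)
```

Averaging over the window trades a little latency for noise rejection, which is why the paper's estimator, though slower than a commercial force/torque sensor, stays within a few tenths of a newton.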
Citations: 0
OpenExo: An open-source modular exoskeleton to augment human function
IF 26.1, CAS Tier 1 (Computer Science), Q1 ROBOTICS. Pub Date: 2025-06-25. DOI: 10.1126/scirobotics.adt1591
Jack R. Williams, Chance F. Cuddeback, Shanpu Fang, Daniel Colley, Noah Enlow, Payton Cox, Paul Pridham, Zachary F. Lerner
Although the field of wearable robotic exoskeletons is rapidly expanding, there are several barriers to entry that discourage many from pursuing research in this area, ultimately hindering growth. Chief among these is the lengthy and costly development process to get an exoskeleton from conception to implementation and the necessity for a broad set of expertise. In addition, many exoskeletons are designed for a specific utility and are confined to the laboratory environment, limiting the flexibility of the designed system to adapt to answer new questions and explore new domains. To address these barriers, we present OpenExo, an open-source modular untethered exoskeleton framework that provides access to all aspects of the design process, including software, electronics, hardware, and control schemes. To demonstrate the utility of this exoskeleton framework, we performed benchtop and experimental validation testing with the system across multiple configurations, including hip-only incline assistance, ankle-only indoor and outdoor assistance, hip-and-ankle load carriage assistance, and elbow-only weightlifting assistance. All aspects of the software architecture, electrical components, hip and Bowden-cable transmission designs, and control schemes are freely available for other researchers to access, use, and modify when looking to address research questions in the field of wearable exoskeletons. Our hope is that OpenExo will accelerate the development and testing of new exoskeleton designs and control schemes while simultaneously encouraging others, including those who would have been turned away from entering the field, to explore new and unique research questions.
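The modularity the abstract emphasizes — the same framework reconfigured as hip-only, ankle-only, hip-and-ankle, or elbow-only — can be sketched as joints that share one interface and are composed freely. The class names, gait-phase torque profile, and peak torques below are illustrative, not OpenExo's actual API:

```python
import math

class JointModule:
    """One swappable joint: exposes a common torque(gait_phase) interface."""

    def __init__(self, name, peak_torque_nm):
        self.name, self.peak = name, peak_torque_nm

    def torque(self, gait_phase):
        # Simple phase-based assistance: peak at mid-stance (phase 0.25),
        # zero during swing; real controllers use richer, tunable profiles.
        return self.peak * max(0.0, math.sin(2 * math.pi * gait_phase))

class Exoskeleton:
    """Composes any subset of joint modules into one configuration."""

    def __init__(self, modules):
        self.modules = modules

    def command(self, gait_phase):
        return {m.name: m.torque(gait_phase) for m in self.modules}

# Hip-and-ankle load-carriage configuration; swap the module list for
# hip-only, ankle-only, or elbow-only variants without touching the loop.
hip_ankle = Exoskeleton([JointModule("hip", 12.0), JointModule("ankle", 20.0)])
cmd = hip_ankle.command(0.25)   # mid-stance: both joints at peak assistance
```

Because configurations differ only in the module list, new hardware or new assistance profiles slot in without changes to the control loop — the design property that makes an open framework like this reusable across research questions.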
Citations: 0
The greatest challenge for prosthetics may be social, not neural, connections
IF 26.1, CAS Tier 1 (Computer Science), Q1 ROBOTICS. Pub Date: 2025-06-25. DOI: 10.1126/scirobotics.adz2721
Robin R. Murphy
Death of the Author: A Novel imagines the influence of an experimental exoskeleton on a disabled author and her family.
Citations: 0
Photocatalytic microrobots for treating bacterial infections deep within sinuses
IF 26.1, CAS Tier 1 (Computer Science), Q1 ROBOTICS. Pub Date: 2025-06-25. DOI: 10.1126/scirobotics.adt0720
Haidong Yu, Xurui Liu, Yabin Zhang, Jie Shen, Xijun Liu, Shubo Liu, Xiangyu Wang, Bonan Sun, Huihui Du, Lin Xu, Bingsuo Zou, Jianning Ding, Qingsong Xu, Li Zhang, Ben Wang
Microrobotic techniques are promising for treating biofilm infections located deep within the human body. However, the presence of highly viscous pus presents a formidable biological barrier, severely restricting targeted and minimally invasive treatments. In addition, conventional antibacterial agents exhibit limited payload integration with microrobotic systems, further compromising therapeutic efficiency. In this study, we propose a photocatalytic microrobot through a magnetically guided, optical fiber–assisted therapeutic platform specifically designed to treat bacterial infections in deep mucosal cavities. The microrobots comprising copper (Cu) single atom–doped bismuth oxoiodide (BiOI), termed CBMRs, can be guided and tracked by real-time x-ray imaging. Under external magnetic actuation, the illuminated region from the magnetically guided optical fiber synchronously follows the CBMR swarm, enabling effective antibacterial action at targeted infection sites. Upon continuous visible-light irradiation, the resultant photothermal effect substantially reduces the viscosity of pus on inflamed mucosal tissues, enhancing the penetration capability of the CBMR swarm by more than threefold compared with baseline conditions. Concurrently, atomic-level design of CBMRs facilitates robust generation of reactive oxygen species, enabling efficient biofilm disruption and reductions in bacterial viability. We validated the effectiveness of this integrated optical fiber–assisted microrobotic platform in a rabbit sinusitis model in vivo, demonstrating its potential for clinically relevant infection therapy.
{"title":"Photocatalytic microrobots for treating bacterial infections deep within sinuses","authors":"Haidong Yu,&nbsp;Xurui Liu,&nbsp;Yabin Zhang,&nbsp;Jie Shen,&nbsp;Xijun Liu,&nbsp;Shubo Liu,&nbsp;Xiangyu Wang,&nbsp;Bonan Sun,&nbsp;Huihui Du,&nbsp;Lin Xu,&nbsp;Bingsuo Zou,&nbsp;Jianning Ding,&nbsp;Qingsong Xu,&nbsp;Li Zhang,&nbsp;Ben Wang","doi":"10.1126/scirobotics.adt0720","DOIUrl":"10.1126/scirobotics.adt0720","url":null,"abstract":"<div >Microrobotic techniques are promising for treating biofilm infections located deep within the human body. However, the presence of highly viscous pus presents a formidable biological barrier, severely restricting targeted and minimally invasive treatments. In addition, conventional antibacterial agents exhibit limited payload integration with microrobotic systems, further compromising therapeutic efficiency. In this study, we propose a photocatalytic microrobot through a magnetically guided, optical fiber–assisted therapeutic platform specifically designed to treat bacterial infections in deep mucosal cavities. The microrobots comprising copper (Cu) single atom–doped bismuth oxoiodide (BiOI), termed CBMRs, can be guided and tracked by real-time x-ray imaging. Under external magnetic actuation, the illuminated region from the magnetically guided optical fiber synchronously follows the CBMR swarm, enabling effective antibacterial action at targeted infection sites. Upon continuous visible-light irradiation, the resultant photothermal effect substantially reduces the viscosity of pus on inflamed mucosal tissues, enhancing the penetration capability of the CBMR swarm by more than threefold compared with baseline conditions. Concurrently, atomic-level design of CBMRs facilitates robust generation of reactive oxygen species, enabling efficient biofilm disruption and reductions in bacterial viability. 
We validated the effectiveness of this integrated optical fiber–assisted microrobotic platform in a rabbit sinusitis model in vivo, demonstrating its potential for clinically relevant infection therapy.</div>","PeriodicalId":56029,"journal":{"name":"Science Robotics","volume":"10 103","pages":""},"PeriodicalIF":26.1,"publicationDate":"2025-06-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144479002","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0