A Macro-Micro Vision Integrated Micromanipulation System for Self-Initialization and Resilient Control

IEEE Transactions on Automation Science and Engineering · IF 6.4 · JCR Q1 (Automation & Control Systems) · CAS Region 2 (Computer Science) · Published: 2025-01-20 · DOI: 10.1109/TASE.2025.3532214
Tiexin Wang;Yun Long;Tianle Weng;Liangjing Yang
Volume 22, pp. 11250-11263 · IEEE Xplore: https://ieeexplore.ieee.org/document/10847922/ · Citations: 0

Abstract

Robotic micromanipulation systems (RMS) enable precise and repeatable operations under a microscope. Traditional RMS rely solely on microscopic visual feedback, necessitating time-consuming manual positioning to bring the tool tip within the microscope field-of-view (Micro-FOV), which limits efficiency and depends heavily on operator skill. This paper proposes an innovative RMS that integrates macro and micro vision to automate tool-tip positioning and facilitate resilient control. The system uses an external camera to obtain the macro field-of-view (Macro-FOV), containing the tool and fiducial markers, and estimates the tool tip's 3D position by triangulation. Visual servoing then guides the tool tip towards the Micro-FOV. Within the Micro-FOV, a tool-sweep detector based on partitioned difference images sequentially locates the tool's shaft and tip. After auto-focusing, the system executes resilient control of the tool tip and Petri dish based on the developed self-calibration and self-recalibration mechanisms. During operation, the system provides an intuitive user interface that presents both macro and micro information, improving the visualization and productivity of micromanipulation. Experiments show that the self-initialization scheme works across different macro-camera viewpoints, reducing the average tip-positioning time from 65.70 s (manual operation) to 50.08 s, thereby lowering manual labor intensity and improving efficiency. The self-recalibration mechanism achieves precise and resilient control, with an average error of $0.95~\mu\mathrm{m}$ over 25 continuous trials. Additionally, the system is robust to vibration and visual interference, underscoring its potential for diverse biomedical applications.

Note to Practitioners

In the biomedical field, robotic micromanipulation systems (RMS) are valued for their high precision and repeatability. Existing research on RMS typically focuses on achieving varying degrees of automation using visual feedback from the microscope, under the assumption that the tool tip is already within the Micro-FOV before the task begins. However, the initial step of moving the tool tip from the Macro-FOV into the Micro-FOV and bringing it into focus usually requires time-consuming manual operation that depends heavily on the skill of the individual operator. To achieve automatic positioning and resilient control of the tool tip, this paper presents an RMS that integrates macro and micro vision. The system combines visual feedback from a macro camera and a microscope to complete self-initialization in three steps: macro tip positioning, micro tip positioning, and self-calibration. After self-initialization, the system provides the operator with an intuitive cursor-based user interface. During operation, the system uses visual feedback to detect the tip position and ensures control accuracy through a self-recalibration mechanism. The proposed self-initialization method can be extended to micromanipulation systems with different types of end-effectors, improving the efficiency and precision of micromanipulation.
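The macro tip positioning described in the abstract estimates the tool tip's 3D position by triangulation. The paper's exact formulation is not given here, so the following is only a generic linear-triangulation (DLT) sketch in NumPy, assuming two calibrated views with known projection matrices (the cameras, matrices, and test point below are invented for illustration):

```python
import numpy as np

def triangulate_point(P1, P2, uv1, uv2):
    """Estimate a 3D point from two pixel observations by linear
    triangulation (DLT): stack the constraints u*(p3.X) - (p1.X) = 0
    from each view and solve the homogeneous system by SVD."""
    u1, v1 = uv1
    u2, v2 = uv2
    A = np.vstack([
        u1 * P1[2] - P1[0],
        v1 * P1[2] - P1[1],
        u2 * P2[2] - P2[0],
        v2 * P2[2] - P2[1],
    ])
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]  # dehomogenize

# Synthetic check: two cameras with a 1-unit baseline observe (1, 2, 10).
P1 = np.hstack([np.eye(3), np.zeros((3, 1))])              # camera at origin
P2 = np.hstack([np.eye(3), np.array([[-1.0], [0], [0]])])  # translated camera
X_true = np.array([1.0, 2.0, 10.0, 1.0])
uv1 = (P1 @ X_true)[:2] / (P1 @ X_true)[2]
uv2 = (P2 @ X_true)[:2] / (P2 @ X_true)[2]
print(triangulate_point(P1, P2, uv1, uv2))  # ≈ [1. 2. 10.]
```

In a real setup the projection matrices would come from calibrating the macro camera against the fiducial markers; the noise-free recovery above only sanity-checks the algebra.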
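Visual servoing then drives the tool tip towards the Micro-FOV. The abstract does not specify the control law, so this is only a textbook proportional image-based servo step, assuming a pre-estimated image Jacobian mapping stage motion to pixel motion (all gains, Jacobian values, and pixel coordinates below are made up for the toy loop):

```python
import numpy as np

def servo_step(tip_px, target_px, J_inv, gain=0.5):
    """One iteration of a proportional image-based visual-servo law:
    map the pixel error to a stage motion command via the inverse of
    an estimated image Jacobian."""
    error = np.asarray(target_px) - np.asarray(tip_px)
    return gain * J_inv @ error  # stage motion command (e.g. mm)

# Toy closed loop with a diagonal Jacobian (pixels per mm of stage travel).
J = np.diag([100.0, 100.0])        # assumed calibration
J_inv = np.linalg.inv(J)
tip = np.array([320.0, 240.0])     # current tip pixel in the Macro-FOV
target = np.array([400.0, 200.0])  # projected Micro-FOV centre
for _ in range(20):
    move = servo_step(tip, target, J_inv)
    tip = tip + J @ move           # simulated plant response
print(np.round(tip))               # converges to the target pixel
```

With gain 0.5 the pixel error halves every iteration, so the loop converges geometrically; a physical system would re-detect the tip each frame instead of simulating the plant.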
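Within the Micro-FOV, the paper locates the tool with a tool-sweep detector based on partitioned difference images. One plausible reading of that idea (hypothetical, not the authors' implementation) is to sweep the tool, threshold the frame-to-frame difference, partition the image into a grid, and pick the cell with the most changed pixels:

```python
import numpy as np

def sweep_detect(prev, curr, grid=(4, 4), thresh=20):
    """Locate a moving tool via a partitioned difference image:
    threshold |curr - prev|, split the image into grid cells, and
    return the (row, col) index of the cell with the most motion."""
    diff = np.abs(curr.astype(np.int16) - prev.astype(np.int16)) > thresh
    h, w = diff.shape
    gh, gw = h // grid[0], w // grid[1]
    counts = diff[:gh * grid[0], :gw * grid[1]] \
        .reshape(grid[0], gh, grid[1], gw).sum(axis=(1, 3))
    return np.unravel_index(np.argmax(counts), counts.shape)

# Synthetic frames: a bright "tool" appears in the lower-right cell.
prev = np.zeros((80, 80), dtype=np.uint8)
curr = prev.copy()
curr[65:75, 65:75] = 255
print(sweep_detect(prev, curr))  # (3, 3)
```

Running the detector while sweeping the tool axially would first reveal the shaft's cell track and then the tip, matching the sequential shaft-then-tip localization the abstract describes.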
Source journal: IEEE Transactions on Automation Science and Engineering (Engineering/Technology - Automation & Control Systems)
CiteScore: 12.50 · Self-citation rate: 14.30% · Articles per year: 404 · Review time: 3.0 months
Journal description: The IEEE Transactions on Automation Science and Engineering (T-ASE) publishes fundamental papers on Automation, emphasizing scientific results that advance efficiency, quality, productivity, and reliability. T-ASE encourages interdisciplinary approaches from computer science, control systems, electrical engineering, mathematics, mechanical engineering, operations research, and other fields. T-ASE welcomes results relevant to industries such as agriculture, biotechnology, healthcare, home automation, maintenance, manufacturing, pharmaceuticals, retail, security, service, supply chains, and transportation. T-ASE addresses a research community willing to integrate knowledge across disciplines and industries. For this purpose, each paper includes a Note to Practitioners that summarizes how its results can be applied or how they might be extended to apply in practice.