
Frontiers in Robotics and AI: Latest Publications

Visuo-tactile feedback policies for terminal assembly facilitated by reinforcement learning.
IF 3.0 Q2 ROBOTICS Pub Date: 2025-10-22 eCollection Date: 2025-01-01 DOI: 10.3389/frobt.2025.1660244
Yuchao Li, Ziqi Jin, Jin Liu, Daolin Ma

Industrial terminal assembly tasks are often repetitive and involve handling components with tight tolerances that are susceptible to damage. Learning an effective terminal assembly policy in the real world is challenging, as collisions between parts and the environment can lead to slippage or part breakage. In this paper, we propose a safe reinforcement learning approach to develop a visuo-tactile assembly policy that is robust to variations in grasp poses. Our method minimizes collisions between the terminal head and terminal base by decomposing the assembly task into three distinct phases. In the first grasp phase, a vision-guided model is trained to pick the terminal head from an initial bin. In the second align phase, a tactile-based grasp pose estimation model is employed to align the terminal head with the terminal base. In the final assembly phase, a visuo-tactile policy is learned to precisely insert the terminal head into the terminal base. To ensure safe training, the robot leverages human demonstrations and interventions. Experimental results on PLC terminal assembly demonstrate that the proposed method achieves 100% successful insertions across 100 different initial end-effector and grasp poses, while imitation learning and online-RL policies yield only 9% and 0%, respectively.
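To make the phase decomposition concrete, the sketch below shows one way the grasp, align, and insertion components could be sequenced in a control loop. It is an illustrative reconstruction, not the authors' implementation: the policy interfaces, observation fields, and phase-transition signals are all assumptions.

```python
# Hypothetical sketch of the three-phase assembly controller described above.
from dataclasses import dataclass
from enum import Enum, auto

class Phase(Enum):
    GRASP = auto()      # vision-guided pick from the bin
    ALIGN = auto()      # tactile grasp-pose estimation and pre-alignment
    ASSEMBLE = auto()   # learned visuo-tactile insertion policy
    DONE = auto()

@dataclass
class Observation:
    rgb: object      # camera image
    tactile: object  # tactile sensor array

class ThreePhaseAssembler:
    def __init__(self, grasp_policy, pose_estimator, insert_policy):
        self.grasp_policy = grasp_policy      # image -> (action, grasped?)
        self.pose_estimator = pose_estimator  # tactile -> grasp-pose offset
        self.insert_policy = insert_policy    # (image, tactile) -> (action, inserted?)
        self.phase = Phase.GRASP

    def step(self, obs: Observation):
        """Return an end-effector command for the current phase."""
        if self.phase == Phase.GRASP:
            action, grasped = self.grasp_policy(obs.rgb)
            if grasped:
                self.phase = Phase.ALIGN
            return action
        if self.phase == Phase.ALIGN:
            # Cancel the estimated grasp error before contact occurs, which is
            # what keeps head/base collisions to a minimum.
            offset = self.pose_estimator(obs.tactile)  # e.g., a numpy vector
            self.phase = Phase.ASSEMBLE
            return -offset
        if self.phase == Phase.ASSEMBLE:
            action, inserted = self.insert_policy(obs.rgb, obs.tactile)
            if inserted:
                self.phase = Phase.DONE
            return action
        return None  # DONE: no further motion
```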

Citations: 0
Real-time open-vocabulary perception for mobile robots on edge devices: a systematic analysis of the accuracy-latency trade-off.
IF 3.0 Q2 ROBOTICS Pub Date: 2025-10-21 eCollection Date: 2025-01-01 DOI: 10.3389/frobt.2025.1693988
Jongyoon Park, Pileun Kim, Daeil Ko

The integration of Vision-Language Models (VLMs) into autonomous systems is of growing importance for improving Human-Robot Interaction (HRI), enabling robots to operate within complex and unstructured environments and collaborate with non-expert users. For mobile robots to be effectively deployed in dynamic settings such as domestic or industrial areas, the ability to interpret and execute natural language commands is crucial. However, while VLMs offer powerful zero-shot, open-vocabulary recognition capabilities, their high computational cost presents a significant challenge for real-time performance on resource-constrained edge devices. This study provides a systematic analysis of the trade-offs involved in optimizing a real-time robotic perception pipeline on the NVIDIA Jetson AGX Orin 64GB platform. We investigate the relationship between accuracy and latency by evaluating combinations of two open-vocabulary detection models and two prompt-based segmentation models. Each pipeline is optimized using various precision levels (FP32, FP16, and Best) via NVIDIA TensorRT. We present a quantitative comparison of the mean Intersection over Union (mIoU) and latency for each configuration, offering practical insights and benchmarks for researchers and developers deploying these advanced models on embedded systems.
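The study's central measurement is the accuracy-latency trade-off of each detector/segmenter/precision combination. The harness below sketches how such a sweep could be scored; the load_pipeline() factory, the placeholder model names, and the per-image IoU averaging are illustrative assumptions (the paper compiles its models with NVIDIA TensorRT, which this sketch treats as an opaque step inside the factory).

```python
# Minimal accuracy-latency benchmarking sketch (assumptions noted above).
import time
import numpy as np

def iou(pred: np.ndarray, gt: np.ndarray) -> float:
    """Intersection over Union between two boolean masks."""
    inter = np.logical_and(pred, gt).sum()
    union = np.logical_or(pred, gt).sum()
    return float(inter) / float(union) if union > 0 else 1.0

def benchmark(pipeline, dataset, warmup=10):
    """pipeline: callable image -> boolean mask; dataset: list of (image, gt_mask)."""
    for img, _ in dataset[:warmup]:   # warm-up pass stabilises GPU clocks/caches
        pipeline(img)
    ious, times = [], []
    for img, gt in dataset:
        t0 = time.perf_counter()
        mask = pipeline(img)
        times.append(time.perf_counter() - t0)
        ious.append(iou(mask, gt))
    return float(np.mean(ious)), 1e3 * float(np.mean(times))  # mIoU, ms/frame

# Hypothetical sweep over two detectors, two segmenters, three precisions:
# for det in ("detector_a", "detector_b"):
#     for seg in ("segmenter_a", "segmenter_b"):
#         for prec in ("fp32", "fp16", "best"):
#             miou, ms = benchmark(load_pipeline(det, seg, prec), val_set)
#             print(f"{det}+{seg}@{prec}: mIoU={miou:.3f}, latency={ms:.1f} ms")
```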

Citations: 0
Towards autonomous robot-assisted transcatheter heart valve implantation: in vivo teleoperation and phantom validation of AI-guided positioning.
IF 3.0 Q2 ROBOTICS Pub Date: 2025-10-21 eCollection Date: 2025-01-01 DOI: 10.3389/frobt.2025.1650228
Jonas Smits, Pierre Schegg, Loic Wauters, Luc Perard, Corentin Languepin, Davide Recchia, Vera Damerjian Pieters, Stéphane Lopez, Didier Tchetche, Kendra Grubb, Jorgen Hansen, Eric Sejor, Pierre Berthet-Rayne

Transcatheter Aortic Valve Implantation (TAVI) is a minimally invasive procedure in which a transcatheter heart valve (THV) is implanted within the patient's diseased native aortic valve. The procedure is increasingly chosen even for intermediate-risk and younger patients, as it combines complication rates comparable to open-heart surgery with the advantage of being far less invasive. Despite its benefits, challenges remain in achieving accurate and repeatable valve positioning, with inaccuracies potentially leading to complications such as THV migration, coronary obstruction, and conduction disturbances (CD). The latter often requires a permanent pacemaker implantation as a costly and life-changing mitigation. Robotic assistance may offer solutions, enhancing precision and standardization and reducing radiation exposure for clinicians. This article introduces a novel solution for robot-assisted TAVI, addressing the growing need for skilled clinicians and improving procedural outcomes. We present an in-vivo animal demonstration of robotic-assisted TAVI, showing the feasibility of teleoperated instrument control and THV deployment, performed by a single operator at a safer distance from radiation sources. Furthermore, THV positioning and deployment under supervised autonomy is demonstrated on a phantom and shown to be feasible using both camera- and fluoroscopy-based imaging feedback and AI. Finally, an initial operator study probes the performance and potential added value of various technology augmentations with respect to a manual expert operator, indicating equivalent or superior accuracy and repeatability using robotic assistance. It is concluded that robot-assisted TAVI is technically feasible in vivo, and presents a strong case for a clinically meaningful application of level-3 autonomy. These findings support the potential of surgical robotic technology to enhance TAVI accuracy and repeatability, ultimately improving patient outcomes and expanding procedural accessibility.

Citations: 0
FROG: a new people detection dataset for knee-high 2D range finders.
IF 3.0 Q2 ROBOTICS Pub Date: 2025-10-20 eCollection Date: 2025-01-01 DOI: 10.3389/frobt.2025.1671673
Fernando Amodeo, Noé Pérez-Higueras, Luis Merino, Fernando Caballero

Mobile robots require knowledge of the environment, especially of the humans located in their vicinity. While the most common approaches for detecting humans involve computer vision, an often overlooked hardware feature that robots can use for people detection is their 2D range finders. These were originally intended for obstacle avoidance and mapping/SLAM tasks. In most robots, they are conveniently located at a height approximately between the ankle and the knee, so they can be used for detecting people too, with a larger field of view and depth resolution compared to cameras. In this paper, we present FROG, a new dataset for people detection using knee-high 2D range finders. This dataset has greater laser resolution, a higher scanning frequency, and more complete annotation data compared to existing datasets such as DROW (Beyer et al., 2018). In particular, the FROG dataset contains annotations for 100% of its laser scans (unlike DROW, which only annotates 5%), 17x more annotated scans, 100x more people annotations, and over twice the distance traveled by the robot. We propose a benchmark based on the FROG dataset, and analyze a collection of state-of-the-art people detectors based on 2D range finder data. We also propose and evaluate a new end-to-end deep learning approach for people detection. Our solution works with the raw sensor data directly (no hand-crafted input features are needed), thus avoiding CPU preprocessing and relieving the developer of having to understand domain-specific heuristics. Experimental results show that the proposed people detector attains results comparable to the state of the art, while an optimized implementation for ROS can operate at more than 500 Hz.
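As an illustration of what consuming raw range-finder data directly can look like, here is a small PyTorch sketch of a fully convolutional network over a 1D laser scan; the architecture, beam count, and output heads are invented for illustration and do not reproduce the paper's model.

```python
# Hypothetical end-to-end people detector over a raw 2D-LiDAR scan.
import torch
import torch.nn as nn

class ScanPersonDetector(nn.Module):
    """Maps a raw range scan (one value per beam) to a per-beam person logit
    and a 2D offset from each beam endpoint to the nearest person centre."""
    def __init__(self, channels: int = 64):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv1d(1, channels, kernel_size=7, padding=3), nn.ReLU(),
            nn.Conv1d(channels, channels, kernel_size=5, padding=2), nn.ReLU(),
            nn.Conv1d(channels, channels, kernel_size=3, padding=1), nn.ReLU(),
        )
        self.conf_head = nn.Conv1d(channels, 1, kernel_size=1)    # person logit
        self.offset_head = nn.Conv1d(channels, 2, kernel_size=1)  # (dx, dy)

    def forward(self, ranges: torch.Tensor):
        # ranges: (batch, beams); add a channel dimension for Conv1d
        feats = self.backbone(ranges.unsqueeze(1))
        return self.conf_head(feats).squeeze(1), self.offset_head(feats)

# Shape check with a synthetic 1080-beam scan (a common 270-degree setup)
model = ScanPersonDetector()
conf, offsets = model(torch.rand(1, 1080))
print(conf.shape, offsets.shape)  # torch.Size([1, 1080]) torch.Size([1, 2, 1080])
```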

Citations: 0
Deep learning methods for 3D tracking of fish in challenging underwater conditions for future perception in autonomous underwater vehicles.
IF 3.0 Q2 ROBOTICS Pub Date: 2025-10-17 eCollection Date: 2025-01-01 DOI: 10.3389/frobt.2025.1628213
Martin Føre, Emilia May O'Brien, Eleni Kelasidi

Due to their utility in replacing workers in tasks unsuitable for humans, unmanned underwater vehicles (UUVs) have become increasingly common tools in the fish farming industry. However, earlier studies and anecdotal evidence from farmers imply that farmed fish tend to move away from and avoid intrusive objects such as vehicles that are deployed and operated inside net pens. Such responses could imply a discomfort associated with the intrusive objects, which, in turn, can lead to stress and impaired welfare in the fish. To prevent this, vehicles and their control systems should be designed to automatically adjust operations when they perceive that they are repelling the fish. A necessary first step in this direction is to develop on-vehicle observation systems for assessing object/vehicle-fish distances in real-time settings that can provide inputs to the control algorithms. Due to their small size and low weight, modern cameras are ideal for this purpose. Moreover, the ongoing rapid developments within deep learning are enabling the use of increasingly sophisticated methods for analyzing camera footage. To explore this potential, we developed three new pipelines for the automated assessment of fish-camera distances in video and images. These were complemented with a recently published method, yielding four pipelines in total: SegmentDepth, BBoxDepth, and SuperGlue, which are based on stereo vision, and DepthAnything, which is monocular. The overall performance was investigated using field data by comparing the fish-object distances obtained from the methods with those measured using a sonar. The four methods were then benchmarked by comparing the number of objects detected and the quality and overall accuracy of the stereo matches (stereo-based methods only). SegmentDepth, DepthAnything, and SuperGlue performed well in comparison with the sonar data, yielding mean absolute errors (MAE) of 0.205 m (95% CI: 0.050-0.360), 0.412 m (95% CI: 0.148-0.676), and 0.187 m (95% CI: 0.073-0.300), respectively, and were integrated into the Robot Operating System (ROS2) framework to enable real-time application in fish behavior identification and the control of robotic vehicles such as UUVs.
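For the stereo-based pipelines, the fish-camera distance ultimately rests on the pinhole-stereo relation Z = f·B/d, and the evaluation reports MAE with 95% confidence intervals. A minimal sketch of both follows; the calibration numbers are made-up placeholders, not values from the paper.

```python
# Stereo depth from disparity plus the MAE/CI metric used in the comparison.
import numpy as np

def depth_from_disparity(disparity_px: float, focal_px: float, baseline_m: float) -> float:
    """Pinhole stereo: Z = f * B / d (metres); disparity must be positive."""
    return focal_px * baseline_m / disparity_px

def mae_with_ci(estimates, reference, z: float = 1.96):
    """Mean absolute error against a reference (e.g., sonar ranges) with a
    normal-approximation 95% confidence interval."""
    err = np.abs(np.asarray(estimates, float) - np.asarray(reference, float))
    mae = err.mean()
    half = z * err.std(ddof=1) / np.sqrt(err.size)
    return mae, (mae - half, mae + half)

# Placeholder calibration: focal length 1200 px, stereo baseline 0.12 m
print(depth_from_disparity(disparity_px=48.0, focal_px=1200.0, baseline_m=0.12))  # 3.0 m
```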

Citations: 0
Enabling scalable inspection of offshore mooring systems using cost-effective autonomous underwater drones.
IF 3.0 Q2 ROBOTICS Pub Date: 2025-10-16 eCollection Date: 2025-01-01 DOI: 10.3389/frobt.2025.1655242
Dong Trong Nguyen, Christian Lindahl Elseth, Jakob Rude Øvstaas, Nikolai Arntzen, Geir Hamre, Dag-Børre Lillestøl

As aquaculture expands to meet global food demand, it remains dependent on manual, costly, infrequent, and high-risk operations due to its reliance on high-end Remotely Operated Vehicles (ROVs). Scalable and autonomous systems are needed to enable safer and more efficient practices. This paper proposes a cost-effective autonomous inspection framework for the monitoring of mooring systems, a critical component ensuring structural integrity and regulatory compliance for both the aquaculture and floating offshore wind (FOW) sectors. The core contribution of this paper is a modular and scalable vision-based inspection pipeline built on the open-source Robot Operating System 2 (ROS 2) and implemented on a low-cost Blueye X3 underwater drone. The system integrates real-time image enhancement, YOLOv5-based object detection, and 4-DOF visual servoing for autonomous tracking of mooring lines. Additionally, the pipeline supports 3D reconstruction of the observed structure using tools such as ORB-SLAM3 and Meshroom, enabling future capabilities in change detection and defect identification. Validation results from simulation, dock, and sea trials showed that the underwater drone can effectively inspect critical mooring-system components with real-time processing on edge hardware. A cost estimation for the proposed approach showed a substantial reduction compared with traditional ROV-based inspections. By increasing the Level of Autonomy (LoA) of off-the-shelf drones, this work provides (1) safer operations by replacing crew-dependent and costly operations that require an ROV and a mothership, (2) scalable monitoring, and (3) regulatory-ready documentation. This offers a practical, cross-industry solution for sustainable offshore infrastructure management.
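As a rough illustration of the 4-DOF visual-servoing step, the sketch below maps a YOLO-style mooring-line detection to proportional body-frame velocity commands (surge, sway, heave, yaw). The gains, setpoints, and detection interface are assumptions, not the paper's controller.

```python
# Hypothetical proportional image-based servoing law for mooring-line tracking.
def servo_command(bbox_center, bbox_area, line_angle, img_w=1280, img_h=720):
    """bbox_center: (u, v) pixel centre of the detected line segment
    bbox_area:   normalised box area, used as a stand-off distance proxy
    line_angle:  apparent line orientation in the image, radians"""
    k_surge, k_sway, k_heave, k_yaw = 0.8, 0.002, 0.002, 0.5  # ad hoc gains
    area_ref = 0.15                      # desired apparent size of the line
    u_err = bbox_center[0] - img_w / 2   # horizontal pixel error
    v_err = bbox_center[1] - img_h / 2   # vertical pixel error
    return {
        "surge": k_surge * (area_ref - bbox_area),  # approach or back off
        "sway": -k_sway * u_err,                    # centre the line laterally
        "heave": -k_heave * v_err,                  # ...and vertically
        "yaw": -k_yaw * line_angle,                 # align heading with the line
    }

print(servo_command(bbox_center=(700, 400), bbox_area=0.10, line_angle=0.1))
```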

Citations: 0
An in-situ participatory approach for assistive robots: methodology and implementation in a healthcare setting.
IF 3.0 Q2 ROBOTICS Pub Date: 2025-10-16 eCollection Date: 2025-01-01 DOI: 10.3389/frobt.2025.1648737
Ferran Gebellí, Raquel Ros

Introduction: This paper presents a participatory design approach for developing assistive robots, addressing the critical gap between designing robotic applications and real-world user needs. Traditional design methodologies often fail to capture authentic requirements due to users' limited familiarity with robotic technologies and the disconnection between design activities and actual deployment contexts.

Methods: We propose a methodology centred on iterative in-situ co-design, where stakeholders collaborate with researchers using functional low-fidelity prototypes within the actual environment of use. Our approach comprises three phases: observation and inspiration, in-situ co-design through prototyping, which is the core of the methodology, and longitudinal evaluation. We implemented this methodology over 10 months at an intermediate healthcare centre. The process involved healthcare staff in defining functionality, designing interactions, and refining system behaviour through hands-on experience with teleoperated prototypes.

Results: The resulting autonomous patrolling robot operated continuously across a two-month deployment. The evaluation, conducted through questionnaires on usability, usage, and understanding of the robotic system, along with open-ended questions, revealed diverse user adoption patterns, with five distinct personas emerging: the enthusiastic high-adopter, the disillusioned high-adopter, the unconvinced mid-adopter, the satisfied mid-adopter, and the non-adopter, each discussed in detail.

Discussion: During the final evaluation deployment, user feedback still identified both new needs and practical improvements, as co-design iterations have the potential to continue indefinitely. Moreover, despite some performance issues, the robot's presence seemed to generate a placebo effect on both staff and patients, while staff behaviour also appeared to be influenced by the regular observation of the researchers. The obtained results provide valuable insights into long-term human-robot interaction dynamics, highlighting the importance of context-based requirements gathering.

Citations: 0
Considerations for designing socially assistive robots for older adults.
IF 3.0 Q2 ROBOTICS Pub Date: 2025-10-16 eCollection Date: 2025-01-01 DOI: 10.3389/frobt.2025.1622206
Samuel A Olatunji, Veronica Falcon, Anjali Ramesh, Wendy A Rogers

Social robots have the potential to support the health activities of older adults. However, they need to be designed for their specific needs; be accepted by and useful to them; and be integrated into their healthcare ecosystem and care network. We explored the research literature to determine the evidence base to guide design considerations necessary for socially assistive robots (SARs) for older adults in the context of healthcare. We identified various elements of the user-centered design of SARs to meet the needs of older adults within the constraints of a home environment. We emphasized the potential benefits of SARs in empowering older adults and supporting their autonomy for health applications. We identified research gaps and provided a road map for future development and deployment to enhance SAR functionality within digital health systems.

Citations: 0
Huggable integrated socially assistive robots: exploring the potential and challenges for sustainable use in long-term care contexts.
IF 3.0 Q2 ROBOTICS Pub Date: 2025-10-15 eCollection Date: 2025-01-01 DOI: 10.3389/frobt.2025.1646353
B M Hofstede, S Ipakchian Askari, T R C van Hoesel, R H Cuijpers, L P de Witte, W A IJsselsteijn, H H Nap

With ageing populations and decreasing numbers of care personnel, care technologies such as socially assistive robots (SARs) offer innovative solutions for healthcare workers and older adults, supporting ageing in place. Among other uses, SARs provide both daytime structure support and social companionship, particularly benefiting people with dementia by providing structure in earlier stages of the disease and comfort in later stages. This research introduces the concept of Huggable Integrated SARs (HI-SAR): a novel subtype of SARs combining a soft, comforting, huggable form with integrated socially assistive functionalities, such as verbal prompts for daytime structure, interactive companionship, and activity monitoring via sensor data, enabling more context-aware interaction. While HI-SARs have shown promise in Asian care contexts, their real-world application and potential in diverse long-term care contexts remain limited and underexplored. This research investigates the potential of HI-SARs in Dutch healthcare settings (eldercare, disability care, and rehabilitation) through three studies conducted between September 2023 and December 2024. Study I examined HI-SAR functions and integration in Dutch care practice via focus groups with professionals, innovation managers, and older adults (N = 36). Study II explored user preferences through sessions with clients with intellectual disabilities and professionals (N = 32). Study III involved two case studies in care settings with clients and caregivers (N = 4). Results indicate that HI-SARs were generally well received by professionals and older adults, who appreciated their support for daily routines and social engagement, particularly for clients with cognitive disabilities such as dementia. However, concerns were raised about hygiene, the functioning of activity monitoring, and limited interactivity. Based on these findings, we recommend four design and implementation strategies to improve the effectiveness of HI-SARs: (1) integrating personalisation options such as customizable voices to increase user acceptance; (2) optimising activity monitoring by simplifying data output and using sensor input more proactively to trigger interactions; (3) considering persons with cognitive impairments as a first target user group; and (4) encouraging individual use to enhance hygiene and tailor experiences to client needs. Overall, this research demonstrates the potential of HI-SARs in diverse long-term care settings, although further research is needed to explore their applicability, usability, and long-term impact.

Citations: 0
Should we get involved? Impact of human collaboration and intervention on multi-robot teams.
IF 3.0 Q2 ROBOTICS Pub Date: 2025-10-15 eCollection Date: 2025-01-01 DOI: 10.3389/frobt.2025.1526287
Joseph Bolarinwa, Manuel Giuliani, Paul Bremner

Introduction: The challenges encountered in the design of multi-robot teams (MRT) highlight the need for different levels of human involvement, creating human-in-the-loop multi-robot teams. By integrating human cognitive abilities with the functionalities of the robots in the MRT, we can enhance overall system performance. Designing such a human-in-the-loop MRT requires several decisions based on the specific context of application. Before implementing these systems in real-world scenarios, it is essential to model and simulate the various components of the MRT to evaluate their impact on performance and the different roles a human operator might play.

Methods: We developed a simulation framework for a human-in-the-loop MRT using the Java Agent DEvelopment framework (JADE) and investigated the effects of different numbers of robots in the MRT, MRT architectures, and levels of human involvement (human collaboration and human intervention) on performance metrics.
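The simulator itself is built on JADE, a Java multi-agent framework. As a language-neutral illustration of the experimental idea (not the authors' model), the toy queueing simulation below measures mean request completion time (RCT) as the number of robots and the mode of human involvement vary; all rates and multipliers are invented for illustration.

```python
# Toy human-in-the-loop MRT simulation: N robots serve a stream of requests.
import heapq
import random

def simulate(num_robots, mode="none", num_requests=200, seed=0):
    rng = random.Random(seed)
    free_at = [0.0] * num_robots            # time each robot next becomes idle
    heapq.heapify(free_at)
    total_rct, t = 0.0, 0.0
    for _ in range(num_requests):
        t += rng.expovariate(1.0)           # Poisson request arrivals
        start = max(t, heapq.heappop(free_at))
        service = rng.expovariate(0.5)      # base task-execution time
        if mode == "collaboration":
            service *= 0.7                  # human help shortens tasks
        elif mode == "intervention":
            service += rng.expovariate(1.0) # human takeover adds delay
        heapq.heappush(free_at, start + service)
        total_rct += (start + service) - t  # request completion time
    return total_rct / num_requests

for n in (2, 4, 8):
    print(n, {m: round(simulate(n, m), 2)
              for m in ("none", "collaboration", "intervention")})
```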

Results: Results show that task execution outcomes and request completion times (RCT) improve with an increasing number of robots in the MRT. Human collaboration reduced the RCT, while human intervention increased the RCT, regardless of the number of robots in the MRT. The effect of system architecture was only significant when the number of robots in the MRT was low.

Discussion: This study demonstrates that both the number of robots in a multi-robot team (MRT) and the inclusion of a human in the loop significantly influence system performance. The findings also highlight the value of simulation as a cost- and time-efficiency strategy to evaluate MRT configurations prior to real-world implementation.

Citations: 0