
Frontiers in Robotics and AI — Latest Publications

Editorial: Assistance personalization/customization for human locomotion tasks by using wearable lower-limb robotic devices.
IF 2.9 Q2 ROBOTICS Pub Date: 2024-07-03 eCollection Date: 2024-01-01 DOI: 10.3389/frobt.2024.1448100
Qiang Jason Zhang, Xuefeng Bao, Zhao Guo, Ge Lv, Myunghee Kim
Citations: 0
Corrigendum: Assimilation of socially assistive robots by older adults: an interplay of uses, constraints and outcomes.
IF 2.9 Q2 ROBOTICS Pub Date: 2024-06-28 eCollection Date: 2024-01-01 DOI: 10.3389/frobt.2024.1438912
Oded Zafrani, Galit Nimrod, Maya Krakovski, Shikhar Kumar, Simona Bar-Haim, Yael Edan

[This corrects the article DOI: 10.3389/frobt.2024.1337380.].

Citations: 0
Adaptive robotic system for the inspection of aerospace slat actuator mount.
IF 2.9 Q2 ROBOTICS Pub Date: 2024-06-27 eCollection Date: 2024-01-01 DOI: 10.3389/frobt.2024.1423319
Nour M Morsi, Mario Mata, Colin S Harrison, David Semple

Introduction: Robotics uptake in the aerospace industry is low, mainly due to the low-volume/high-accuracy production that aerospace manufacturers require. Furthermore, aerospace manufacturing and assembly sites are often unstructured environments not specifically suitable for robots to operate in. Methods: This paper introduces a robotic visual inspection system using off-the-shelf components able to inspect the mounting holes for wing slat actuators without the need for fixed-coordinate programming; the part just needs to be left within reach of the robot. Our system sets one of the opposed pairs of mounting holes as a reference (the "datum") and then compares the tilt of all other pairs of mounting holes with respect to it. Under the assumption that any deviation in the mounting hole tilt is not systematic but due to normal manufacturing tolerances, our system will either guarantee the correct alignment of all mounting holes or highlight the existence of misaligned holes. Results and Discussion: Computer-vision tilt measurements are performed with an error below 0.03° using custom optimization for the sub-pixel determination of the center and radius of the mounting holes. The error introduced by the robot's motion from the datum to each of the remaining hole pairs is compensated for by moving back to the datum and fixing the orientation again before moving to inspect the next hole pair. This error is estimated to be approximately 0.05°, bringing the total estimated tilt error for any mounting hole pair to 0.08° with respect to the datum. This is confirmed by manually measuring the tilt of the hole pairs using a clock gauge on a calibrated table (not used during normal operation).
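The datum-comparison and error-budget logic described in this abstract can be sketched in a few lines. This is a minimal illustration, not the authors' implementation: the function names and the hole-spacing parameter are hypothetical, while the 0.03° (vision) and 0.05° (robot motion) bounds are the figures quoted above.

```python
import math

def pair_tilt_deg(center_a, center_b, spacing_mm):
    """Tilt of an opposed mounting-hole pair, estimated from the lateral
    offset between the two detected circle centres (x, y, in mm) over the
    nominal hole spacing along the actuator axis."""
    offset = math.hypot(center_b[0] - center_a[0], center_b[1] - center_a[1])
    return math.degrees(math.atan2(offset, spacing_mm))

def is_misaligned(pair_tilt, datum_tilt, vision_err_deg=0.03, motion_err_deg=0.05):
    """Flag a hole pair whose tilt deviates from the datum by more than the
    combined worst-case error: 0.03 deg (vision) + 0.05 deg (motion) = 0.08 deg."""
    return abs(pair_tilt - datum_tilt) > vision_err_deg + motion_err_deg
```

Under this scheme a deviation of 0.05° from the datum is still within the measurement budget and passes, whereas 0.2° is flagged as a genuinely misaligned pair.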

Citations: 0
What helps, what hinders? Focus group findings on barriers and facilitators for mobile service robot use in a psychosocial group therapy for people with dementia.
IF 2.9 Q2 ROBOTICS Pub Date: 2024-06-21 eCollection Date: 2024-01-01 DOI: 10.3389/frobt.2024.1258847
Catharina Wasic, Robert Erzgräber, Manja Unger-Büttner, Carolin Donath, Hans-Joachim Böhme, Elmar Graessel

Introduction: Many countries are facing a shortage of healthcare workers. Furthermore, healthcare workers are experiencing many stressors, resulting in psychological issues, impaired health, and increased intentions to leave the workplace. In recent years, different technologies have been implemented to lighten the workload on healthcare workers, such as electronic patient files. Robotic solutions are still rather uncommon. To foster acceptance and actual use of robots, their functionalities should correspond to users' needs.

Method: In the pilot study Care4All-Initial, we developed and field-tested applications for a mobile service robot in a psychosocial, multimodal group therapy for people with dementia. To guide the process and assess possible facilitators and barriers, we conducted a recurring focus group including people with dementia, therapists, professional caregivers, and researchers from different disciplines, following a user-centered design approach. The focus group suggested and reviewed applications and discussed ethical implications. We recorded the focus group discussions in writing and analysed them using content analysis.

Results: The focus group discussed 15 different topics regarding ethical concerns that we used as a framework for the research project: Ethical facilitators were respect for the autonomy of the people with dementia and their proxies regarding participation and data sharing. Furthermore, the robot had to be useful for the therapists and attendees. Ethical barriers were deception of, and possible harm to, the people with dementia or therapists. The focus group suggested 32 different applications. We implemented 13 applications that centered on the robot interacting with the people with dementia and lightening the therapists' workload. Implementation was facilitated by utilizing existing hardware and software and by building on existing applications. Barriers to implementation arose where hardware, software, or applications did not fit the scope of the project.

Discussion: To avoid barriers to robot use in group therapy for people with dementia, the robot's applications must be developed thoroughly enough for flawless and safe use. The robot should not cause irritation or agitation, but rather be meaningful and useful to its users. To facilitate development, sufficient time, money, expertise, and planning are essential.

Citations: 0
Enhancing unmanned ground vehicle performance in SAR operations: integrated gesture-control and deep learning framework for optimised victim detection.
IF 2.9 Q2 ROBOTICS Pub Date: 2024-06-18 eCollection Date: 2024-01-01 DOI: 10.3389/frobt.2024.1356345
Muhammad Hamza Zafar, Syed Kumayl Raza Moosavi, Filippo Sanfilippo

In this study, we address the critical need for enhanced situational awareness and victim detection capabilities in Search and Rescue (SAR) operations amidst disasters. Traditional unmanned ground vehicles (UGVs) often struggle in such chaotic environments due to their limited manoeuvrability and the challenge of distinguishing victims from debris. Recognising these gaps, our research introduces a novel technological framework that integrates advanced gesture-recognition with cutting-edge deep learning for camera-based victim identification, specifically designed to empower UGVs in disaster scenarios. At the core of our methodology is the development and implementation of the Meerkat Optimization Algorithm-Stacked Convolutional Neural Network-Bi-Long Short Term Memory-Gated Recurrent Unit (MOA-SConv-Bi-LSTM-GRU) model, which sets a new benchmark for hand gesture detection with its remarkable performance metrics: accuracy, precision, recall, and F1-score all approximately 0.9866. This model enables intuitive, real-time control of UGVs through hand gestures, allowing for precise navigation in confined and obstacle-ridden spaces, which is vital for effective SAR operations. Furthermore, we leverage the capabilities of the latest YOLOv8 deep learning model, trained on specialised datasets to accurately detect human victims under a wide range of challenging conditions, such as varying occlusions, lighting, and perspectives. Our comprehensive testing in simulated emergency scenarios validates the effectiveness of our integrated approach. The system demonstrated exceptional proficiency in navigating through obstructions and rapidly locating victims, even in environments with visual impairments like smoke, clutter, and poor lighting. Our study not only highlights the critical gaps in current SAR response capabilities but also offers a pioneering solution through a synergistic blend of gesture-based control, deep learning, and purpose-built robotics. 
The key findings underscore the potential of our integrated technological framework to significantly enhance UGV performance in disaster scenarios, thereby optimising life-saving outcomes when time is of the essence. This research paves the way for future advancements in SAR technology, with the promise of more efficient and reliable rescue operations in the face of disaster.
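The accuracy, precision, recall, and F1 figures quoted above (all approximately 0.9866) follow the standard confusion-matrix definitions. A minimal sketch for the binary case — the counts below are hypothetical and not the authors' data:

```python
def classification_metrics(tp, fp, fn, tn):
    """Accuracy, precision, recall, and F1 score from binary
    confusion-matrix counts (true/false positives and negatives)."""
    accuracy = (tp + tn) / (tp + fp + fn + tn)
    precision = tp / (tp + fp)          # fraction of flagged gestures that are correct
    recall = tp / (tp + fn)             # fraction of true gestures that are detected
    f1 = 2 * precision * recall / (precision + recall)  # harmonic mean
    return accuracy, precision, recall, f1
```

For multi-class gesture recognition these quantities are typically averaged per class (macro or weighted averaging), which is presumably how a single value close to 0.9866 for each metric is reported.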

Citations: 0
Hybrid controller with neural network PID/FOPID operations for two-link rigid robot manipulator based on the zebra optimization algorithm
IF 3.4 Q2 Computer Science Pub Date: 2024-06-14 DOI: 10.3389/frobt.2024.1386968
Mohamed Jasim Mohamed, B. K. Oleiwi, Ahmad Taher Azar, A. Mahlous
The performance of the robotic manipulator is negatively impacted by outside disturbances and uncertain parameters. The system's variables are also highly coupled, complex, and nonlinear, indicating that it is a multi-input, multi-output system. Therefore, it is necessary to develop a controller that can control the variables in the system in order to handle these complications. This work proposes six control structures based on neural networks (NNs) with proportional integral derivative (PID) and fractional-order PID (FOPID) controllers to operate a 2-link rigid robot manipulator (2-LRRM) for trajectory tracking. These are named set-point-weighted PID (W-PID), set-point weighted FOPID (W-FOPID), recurrent neural network (RNN)-like PID (RNNPID), RNN-like FOPID (RNN-FOPID), NN+PID, and NN+FOPID controllers. The zebra optimization algorithm (ZOA) was used to adjust the parameters of the proposed controllers while minimizing the integral-time-square error (ITSE). A new objective function was proposed for tuning to generate controllers with minimal chattering in the control signal. After implementing the proposed controller designs, a comparative robustness study was conducted among these controllers by altering the initial conditions, disturbances, and model uncertainties.
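The ITSE criterion that the ZOA tuner minimizes is the integral of the time-weighted squared tracking error, ITSE = ∫ t·e(t)² dt, which penalizes errors that persist late in the response. A minimal sketch using the trapezoidal rule over sampled data — illustrative only; the sample times and error signal here are hypothetical, not the paper's simulation:

```python
def itse(t, e):
    """Integral of Time-weighted Squared Error, ITSE = integral of t * e(t)^2 dt,
    approximated with the trapezoidal rule over sample times t and errors e."""
    total = 0.0
    for k in range(1, len(t)):
        f_prev = t[k - 1] * e[k - 1] ** 2   # integrand at previous sample
        f_curr = t[k] * e[k] ** 2           # integrand at current sample
        total += 0.5 * (f_prev + f_curr) * (t[k] - t[k - 1])
    return total
```

An optimizer such as ZOA would evaluate this cost (plus, per the abstract, a chattering penalty on the control signal) for each candidate set of PID/FOPID gains and keep the gains that minimize it.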
Citations: 0
Silicone-layered waterproof electrohydraulic soft actuators for bio-inspired underwater robots
IF 3.4 Q2 Computer Science Pub Date: 2024-06-14 DOI: 10.3389/frobt.2024.1298624
Takumi Shibuya, Shuya Watanabe, Jun Shintake
Electrohydraulic soft actuators are a promising soft actuation technology for constructing bio-inspired underwater robots owing to features such as large deformations and forces, fast responses, and high electromechanical efficiencies. However, this actuation technology requires high voltages, thereby limiting the use of these actuators in water and hindering the development of underwater robots. This paper describes a method for creating bio-inspired underwater robots using silicone-layered electrohydraulic soft actuators. The silicone layer functions as an insulator, enabling the application of high voltages underwater. Moreover, bending and linear actuation can be achieved by applying the silicone layers on one or both sides of the actuator. As a proof of concept, bending and linear actuators with planar dimensions of 20 mm × 40 mm (length × width) are fabricated and characterized. Underwater actuation is observed in both types of actuators. The bending actuators exhibit a bending angle and blocked force of 39.0° and 9.6 mN, respectively, at an applied voltage of 10 kV. Further, the linear actuators show a contraction strain and blocked force of 6.6% and 956.1 mN, respectively, at an applied voltage of 10 kV. The actuators are tested at a shallow depth near the water surface, confirming that they can operate at least at that depth. The actuators are subsequently used to implement various soft robotic devices such as a ray robot, a fish robot, a water-surface sliding robot, and a gripper.
Citations: 0
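The swimming-speed figures reported for this work can be cross-checked with simple arithmetic. A minimal Python sketch — note that the implied body length of roughly 34 mm is an inference from the reported 31.2 mm/s and 0.91 body length/s, not a value stated in the paper:

```python
def body_lengths_per_second(speed_mm_s: float, body_length_mm: float) -> float:
    """Convert an absolute swimming speed to body lengths per second."""
    return speed_mm_s / body_length_mm

# The abstract reports 31.2 mm/s ≈ 0.91 BL/s, implying a body length of
# roughly 31.2 / 0.91 ≈ 34.3 mm (an inference, not stated in the paper).
implied_body_length = 31.2 / 0.91
print(round(implied_body_length, 1))                               # 34.3
print(round(body_lengths_per_second(31.2, implied_body_length), 2))  # 0.91
```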
Learning manufacturing computer vision systems using tiny YOLOv4
IF 3.4 Q2 Computer Science Pub Date : 2024-06-12 DOI: 10.3389/frobt.2024.1331249
Adán Medina, Russel Bradley, Wenhao Xu, Pedro Ponce, Brian Anthony, Arturo Molina
Implementing and deploying advanced technologies is central to improving manufacturing processes and marks a transformative stride for the industrial sector. Computer vision plays a crucial role in this advancement, demonstrating broad applicability and profound impact across industrial operations. This pivotal technology is not merely an additive enhancement but a revolutionary approach that redefines quality control, automation, and operational efficiency in manufacturing. By integrating computer vision, industries can significantly optimize their current processes and spearhead innovations that could set new standards for future industrial endeavors. However, integrating computer vision in these contexts necessitates comprehensive training programs for operators, given the system's complexity and abstract nature. Historically, training modalities have grappled with the difficulty of teaching concepts as advanced as computer vision. Despite these challenges, computer vision has recently surged to the forefront across disciplines, owing to its versatility and superior performance, often matching or exceeding the capabilities of other established technologies. Nonetheless, there is a noticeable knowledge gap among students, particularly in comprehending the application of Artificial Intelligence (AI) within computer vision. This disconnect underscores the need for an educational paradigm that transcends traditional theoretical instruction; cultivating a more practical understanding of the symbiotic relationship between AI and computer vision is essential. To address this, the current work proposes a project-based instructional approach to bridge the educational divide, enabling students to engage directly with the practical aspects of computer vision applications within AI. By guiding students through a hands-on project, they learn how to effectively utilize a dataset, train an object detection model, and implement it within a microcomputer infrastructure. This immersive experience is intended to bolster theoretical knowledge and provide a practical understanding of deploying AI techniques within computer vision. The main goal is to equip students with a robust skill set that translates into practical acumen, preparing a competent workforce to navigate and innovate in the complex landscape of Industry 4.0. This approach emphasizes the importance of adapting educational strategies to meet the evolving demands of advanced technological infrastructures and ensures that emerging professionals are adept at harnessing transformative tools like computer vision in industrial settings.
Citations: 0
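The object-detection pipeline such a project teaches rests on post-processing steps common to all YOLO-family detectors, including tiny YOLOv4: intersection-over-union (IoU) scoring and non-maximum suppression (NMS). A minimal, self-contained Python sketch of these two steps — a generic illustration, not code from the article's course materials:

```python
# Boxes are axis-aligned (x1, y1, x2, y2) tuples.

def iou(a, b):
    """Intersection-over-union of two axis-aligned boxes."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter) if inter else 0.0

def nms(boxes, scores, iou_thresh=0.5):
    """Greedy NMS: keep the highest-scoring box, drop boxes overlapping it."""
    order = sorted(range(len(boxes)), key=lambda i: scores[i], reverse=True)
    keep = []
    for i in order:
        if all(iou(boxes[i], boxes[j]) < iou_thresh for j in keep):
            keep.append(i)
    return keep

boxes = [(0, 0, 10, 10), (1, 1, 11, 11), (20, 20, 30, 30)]
scores = [0.9, 0.8, 0.7]
print(nms(boxes, scores))  # [0, 2] — the two overlapping boxes collapse to one
```

In a real deployment on a microcomputer, the detector produces hundreds of candidate boxes per frame and a step like `nms` reduces them to one box per object before display.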
A navigated, robot-driven laser craniotomy tool for frameless depth electrode implantation. An in-vivo recovery animal study
IF 3.4 Q2 Computer Science Pub Date : 2024-06-12 DOI: 10.3389/frobt.2024.1355409
F. Winter, Patrick Pilz, Anne M. Kramer, Daniel Beer, Patrick Gono, M. Morawska, Johannes Hainfellner, Sigrid Klotz, M. Tomschik, Ekaterina Pataraia, Gilbert Hangel, C. Dorfer, Karl Roessler
Objectives: We recently introduced a frameless, navigated, robot-driven laser tool for depth electrode implantation as an alternative to frame-based procedures. This method has previously been used only in cadaver and non-recovery studies; this is the first study to test the robot-driven laser tool in an in-vivo recovery animal study.
Methods: A preoperative computed tomography (CT) scan was conducted to plan trajectories in sheep specimens. Burr hole craniotomies were performed using the frameless, navigated, robot-driven laser tool. Depth electrodes were implanted after cut-through detection was confirmed, and the electrodes were cut at the skin level postoperatively. Postoperative imaging was performed to verify accuracy, and histopathological analysis was performed on bone, dura, and cortex samples.
Results: Fourteen depth electrodes were implanted in two sheep specimens. Anesthetic protocols showed no intraoperative irregularities. One sheep was euthanized on the day of the procedure; the other remained alive for 1 week without neurological deficits. Postoperative MRI and CT showed no intracerebral bleeding, infarction, or unintended damage. The average bone thickness was 6.2 mm (range 4.1–8.0 mm), and the angulation of the planned trajectories varied from 65.5° to 87.4°. The deviation of the entry point produced by the frameless laser beam ranged from 0.27 mm to 2.24 mm. Histopathological analysis did not reveal any damage associated with the laser beam.
Conclusion: The novel robot-driven laser craniotomy tool showed promising results in this first in-vivo recovery study. These findings indicate that laser craniotomies can be performed safely and that cut-through detection is reliable.
Citations: 0
Socially adaptive cognitive architecture for human-robot collaboration in industrial settings
IF 3.4 Q2 Computer Science Pub Date : 2024-06-10 DOI: 10.3389/frobt.2024.1248646
Ismael T. Freire, Oscar Guerrero-Rosado, A. F. Amil, P. Verschure
This paper introduces DAC-HRC, a novel cognitive architecture designed to optimize human-robot collaboration (HRC) in industrial settings, particularly within the context of Industry 4.0. The architecture is grounded in the Distributed Adaptive Control theory and the principles of joint intentionality and interdependence, which are key to effective HRC. Joint intentionality refers to the shared goals and mutual understanding between a human and a robot, while interdependence emphasizes the reliance on each other's capabilities to complete tasks. DAC-HRC is applied to a hybrid recycling plant for the disassembly and recycling of Waste Electrical and Electronic Equipment (WEEE) devices. The architecture incorporates several cognitive modules operating at different timescales and abstraction levels, fostering adaptive collaboration that is personalized to each human user. The effectiveness of DAC-HRC is demonstrated through several pilot studies, showcasing functionalities such as turn-taking interaction, personalized error-handling mechanisms, adaptive safety measures, and gesture-based communication. These features enhance human-robot collaboration in the recycling plant by promoting real-time robot adaptation to human needs and preferences. The DAC-HRC architecture aims to contribute to a new HRC paradigm, paving the way for more seamless and efficient collaboration in Industry 4.0 through socially adept cognitive architectures.
Citations: 0
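The "cognitive modules operating at different timescales" idea described in this abstract can be illustrated with a toy update loop. The module names and tick rates below are invented for illustration only; this is not the DAC-HRC implementation:

```python
# Toy illustration of cognitive modules ticking at different timescales,
# the structural idea behind layered architectures such as DAC-HRC.
# Module names and periods are hypothetical.

class Module:
    def __init__(self, name, period):
        self.name, self.period, self.log = name, period, []

    def maybe_step(self, t):
        # Run only on ticks that are multiples of this module's period.
        if t % self.period == 0:
            self.log.append(t)

modules = [
    Module("reactive_safety", 1),     # fastest loop: e.g. stop on intrusion
    Module("adaptive_pacing", 5),     # mid loop: adjust pace to the user
    Module("contextual_planner", 20), # slow loop: reassign task roles
]

for t in range(40):
    for m in modules:
        m.maybe_step(t)

print([len(m.log) for m in modules])  # [40, 8, 2]
```

The point of the layering is that fast loops react within a tick while slow loops accumulate context across many ticks, which is how such architectures personalize behavior without sacrificing responsiveness.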