
Latest Publications in Frontiers in Robotics and AI

Journey tracker: driver alerting system with a deep learning approach.
IF 2.9 | Q2 ROBOTICS | Pub Date: 2024-10-04 | eCollection Date: 2024-01-01 | DOI: 10.3389/frobt.2024.1433795
N L Yashaswini, Vanishri Arun, B M Shashikala, Shyla Raj, H Y Vani, Francesco Flammini

Negligence of public transport drivers due to drowsiness poses risks not only to their own lives but also to the lives of passengers. The designed journey tracker system alerts the drivers and activates potential penalties. A custom EfficientNet model architecture, based on EfficientNet design principles, is built and trained using the Media Research Lab (MRL) eye dataset. Reflections in frames are filtered out to ensure accurate detections. A 10-minute initial period is used to learn the driver's baseline behavior, enhancing the reliability of drowsiness detections. Input from drivers is considered when determining the frame rate, enabling more precise real-time monitoring. Only the eye regions of individual drivers are captured to maintain privacy and ethical standards, fostering driver comfort. Hyperparameter tuning and testing of different activation functions during model training aim to strike a balance between model complexity, performance, and computational cost. The model achieved an accuracy rate of 95%, and the results demonstrate that the "swish" activation function outperforms the ReLU, sigmoid, and tanh activation functions in extracting hierarchical features. Additionally, models trained from scratch exhibit superior performance compared to pretrained models. This system promotes safer public transportation and enhances professionalism by monitoring driver alertness. The system detects closed eyes and performs a cross-reference using personalization data and pupil detection to trigger appropriate alerts and impose penalties.
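The abstract specifies only that the classifier follows EfficientNet design principles and uses the swish activation, so the following PyTorch sketch is one hypothetical reading of that recipe: depthwise-separable blocks with SiLU ("swish") applied to grayscale eye crops. The input size, channel widths, and two-class head are assumptions, not the paper's configuration.

```python
import torch
import torch.nn as nn

class EyeStateNet(nn.Module):
    """Small CNN in the EfficientNet style (depthwise-separable convolutions
    + swish). Layer sizes here are illustrative guesses, not the paper's."""
    def __init__(self, num_classes: int = 2):
        super().__init__()
        def block(cin, cout, stride):
            return nn.Sequential(
                nn.Conv2d(cin, cin, 3, stride, 1, groups=cin, bias=False),  # depthwise
                nn.BatchNorm2d(cin),
                nn.SiLU(),                                                  # "swish"
                nn.Conv2d(cin, cout, 1, bias=False),                        # pointwise
                nn.BatchNorm2d(cout),
                nn.SiLU(),
            )
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, 2, 1), nn.SiLU(),   # single-channel eye crops
            block(16, 32, 2),
            block(32, 64, 2),
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(64, num_classes)       # open vs. closed eye

    def forward(self, x):
        return self.head(self.features(x).flatten(1))

# Example: a batch of 8 grayscale 64x64 eye crops (MRL-style input assumed).
logits = EyeStateNet()(torch.randn(8, 1, 64, 64))
print(logits.shape)  # torch.Size([8, 2])
```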

Citations: 0
AAT4IRS: automated acceptance testing for industrial robotic systems.
IF 2.9 | Q2 ROBOTICS | Pub Date: 2024-10-03 | eCollection Date: 2024-01-01 | DOI: 10.3389/frobt.2024.1346580
Marcela G Dos Santos, Sylvain Hallé, Fabio Petrillo, Yann-Gaël Guéhéneuc

Industrial robotic systems (IRS) consist of industrial robots that automate industrial processes. They accurately perform repetitive tasks, replacing or assisting with dangerous jobs like assembly in the automotive and chemical industries. Failures in these systems can be catastrophic, so it is important to ensure their quality and safety before using them. One way to do this is by applying a software testing process to find faults before they become failures. However, software testing in industrial robotic systems has some challenges. These include differences in perspectives on software testing from people with diverse backgrounds, coordinating and collaborating with diverse teams, and performing software testing within the complex integration inherent in industrial environments. In traditional systems, a well-known development process uses simple, structured sentences in English to facilitate communication between project team members and business stakeholders. This process is called behavior-driven development (BDD), and one of its pillars is the use of templates to write user stories, scenarios, and automated acceptance tests. We propose a software testing (ST) approach called automated acceptance testing for industrial robotic systems (AAT4IRS) that uses natural language to write the features and scenarios to be tested. We evaluated our ST approach through a proof-of-concept, performing a pick-and-place process and applying mutation testing to measure its effectiveness. The results show that the test suites implemented using AAT4IRS were highly effective, with 79% of the generated mutants detected, thus instilling confidence in the robustness of our approach.
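The BDD template idea, natural-language scenarios bound by patterns to executable steps, can be shown with a self-contained toy. The scenario wording, step registry, and pick-and-place world below are illustrative inventions, not the paper's AAT4IRS templates or tooling:

```python
import re

# Gherkin-style scenario for a pick-and-place check (illustrative only).
SCENARIO = """
Given a part at position bin_A
When the robot picks the part
And the robot places the part at bin_B
Then the part should be at bin_B
"""

world = {"part": "bin_A", "gripper": None}
steps = []

def step(pattern):
    # Register a step implementation under a regex, as BDD frameworks do.
    def register(fn):
        steps.append((re.compile(pattern), fn))
        return fn
    return register

@step(r"a part at position (\w+)")
def given_part(pos):
    world["part"] = pos

@step(r"the robot picks the part")
def pick():
    world["gripper"], world["part"] = world["part"], None

@step(r"the robot places the part at (\w+)")
def place(pos):
    world["part"], world["gripper"] = pos, None

@step(r"the part should be at (\w+)")
def check(pos):
    assert world["part"] == pos, f"expected {pos}, got {world['part']}"

for line in filter(None, map(str.strip, SCENARIO.splitlines())):
    text = re.sub(r"^(Given|When|Then|And)\s+", "", line)
    for pattern, fn in steps:
        m = pattern.fullmatch(text)
        if m:
            fn(*m.groups())
            break

print("scenario passed")
```

Running the script executes each scenario line against its matching step and prints "scenario passed"; a mutation of `place()` (e.g., dropping the position update) would make the Then-step assertion fail, which is exactly the behaviour that mutation testing scores.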

Citations: 0
Towards a computational model for higher orders of Theory of Mind in social agents.
IF 2.9 | Q2 ROBOTICS | Pub Date: 2024-10-02 | eCollection Date: 2024-01-01 | DOI: 10.3389/frobt.2024.1468756
Federico Tavella, Federico Manzi, Samuele Vinanzi, Cinzia Di Dio, Davide Massaro, Angelo Cangelosi, Antonella Marchetti

Effective communication between humans and machines requires artificial tools to adopt a human-like social perspective. The Theory of Mind (ToM) enables understanding and predicting mental states and behaviours, crucial for social interactions from childhood through adulthood. Artificial agents with ToM skills can better coordinate actions, such as in warehouses or healthcare. Incorporating ToM in AI systems can revolutionise our interactions with intelligent machines. This proposal emphasises the current focus on first-order ToM models in the literature and investigates the potential of creating a computational model for higher-order ToM.
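To make "order" concrete, here is a deliberately tiny sketch (our own construction, not the authors' proposed model) in which each wrap of a belief inside another agent's perspective raises its ToM order by one:

```python
from dataclasses import dataclass

@dataclass
class Belief:
    """order 0: a fact about the world; order k: a belief held by `holder`
    about an order k-1 belief. Purely illustrative data structure."""
    holder: str
    content: object
    order: int = 0

def attribute(holder: str, inner: Belief) -> Belief:
    # Wrapping a belief inside another agent's perspective raises its order.
    return Belief(holder, inner, inner.order + 1)

fact = Belief("world", "the ball is in the basket")
first_order = attribute("B", fact)           # B believes the ball is in the basket
second_order = attribute("A", first_order)   # A believes that B believes it

print(second_order.order)  # 2 -> the "higher orders" the proposal targets
```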

Citations: 0
Corrigendum: Optimal sway motion reduction in forestry cranes.
IF 2.9 | Q2 ROBOTICS | Pub Date: 2024-10-01 | eCollection Date: 2024-01-01 | DOI: 10.3389/frobt.2024.1491980
Elham Kowsari, Reza Ghabcheloo

[This corrects the article DOI: 10.3389/frobt.2024.1417741.].

Citations: 0
Bridging vision and touch: advancing robotic interaction prediction with self-supervised multimodal learning.
IF 2.9 | Q2 ROBOTICS | Pub Date: 2024-09-30 | eCollection Date: 2024-01-01 | DOI: 10.3389/frobt.2024.1407519
Luchen Li, Thomas George Thuruthel

Predicting the consequences of the agent's actions on its environment is a pivotal challenge in robotic learning, which plays a key role in developing higher cognitive skills for intelligent robots. While current methods have predominantly relied on vision and motion data to generate the predicted videos, more comprehensive sensory perception is required for complex physical interactions such as contact-rich manipulation or highly dynamic tasks. In this work, we investigate the interdependence between vision and tactile sensation in the scenario of dynamic robotic interaction. A multi-modal fusion mechanism is introduced to the action-conditioned video prediction model to forecast future scenes, enriching the single-modality prototype with a compressed latent representation of multiple sensory inputs. Additionally, to accomplish the interactive setting, we built a robotic interaction system equipped with both web cameras and vision-based tactile sensors to collect the dataset of vision-tactile sequences and the corresponding robot action data. Finally, through a series of qualitative and quantitative comparative studies of different prediction architectures and tasks, we present an insightful analysis of the cross-modality influence among vision, touch, and action, revealing the asymmetrical impact these sensations have when contributing to the interpretation of environmental information. This opens possibilities for more adaptive and efficient robotic control in complex environments, with implications for dexterous manipulation and human-robot interaction.
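The fusion mechanism is described only at a high level, so the following PyTorch sketch shows one plausible shape for an action-conditioned vision-tactile latent: per-modality encoders concatenated with an action embedding and compressed into a single vector. All sizes, including the 7-dimensional action command, are our assumptions rather than the paper's architecture:

```python
import torch
import torch.nn as nn

class FusionEncoder(nn.Module):
    """Sketch of an action-conditioned vision-tactile fusion step: each
    modality is encoded separately, then compressed into one latent that a
    video-prediction head could consume. All sizes are illustrative."""
    def __init__(self, latent_dim: int = 64):
        super().__init__()
        self.vision = nn.Sequential(nn.Conv2d(3, 16, 5, 4), nn.ReLU(),
                                    nn.AdaptiveAvgPool2d(4), nn.Flatten())
        self.tactile = nn.Sequential(nn.Conv2d(3, 16, 5, 4), nn.ReLU(),
                                     nn.AdaptiveAvgPool2d(4), nn.Flatten())
        self.action = nn.Linear(7, 32)               # e.g., a 7-DoF arm command
        self.fuse = nn.Linear(16 * 16 * 2 + 32, latent_dim)

    def forward(self, rgb, touch, act):
        z = torch.cat([self.vision(rgb), self.tactile(touch),
                       self.action(act)], dim=-1)
        return self.fuse(z)                          # compressed multimodal latent

enc = FusionEncoder()
latent = enc(torch.randn(2, 3, 64, 64),   # camera frames
             torch.randn(2, 3, 64, 64),   # vision-based tactile images
             torch.randn(2, 7))           # robot actions
print(latent.shape)  # torch.Size([2, 64])
```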

Citations: 0
Siamese and triplet network-based pain expression in robotic avatars for care and nursing training.
IF 2.9 | Q2 ROBOTICS | Pub Date: 2024-09-26 | eCollection Date: 2024-01-01 | DOI: 10.3389/frobt.2024.1419584
Miran Lee, Minjeong Lee, Suyeong Kim

Care and nursing training (CNT) refers to developing the ability to respond effectively to patient needs by investigating their requests and improving trainees' care skills in a caring environment. Although conventional CNT programs have been based on videos, books, and role-playing, the best approach is to practice on a real human. However, it is challenging to recruit patients for continuous training, and the patients may experience fatigue or boredom with iterative testing. As an alternative approach, a patient robot that reproduces various human diseases and provides feedback to trainees has been introduced. This study presents a patient robot that can express feelings of pain, similar to a real human, in joint care education. The two primary objectives of the proposed patient robot-based care training system are (a) to infer the pain felt by the patient robot and intuitively provide the trainee with the patient's pain state, and (b) to provide facial expression-based visual feedback from the patient robot for care training.
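The triplet-network component named in the title can be illustrated with the standard triplet objective, which pulls embeddings of same-pain-level faces together and pushes different levels apart. The toy embedding network, input size, and margin below are illustrative assumptions, not the paper's model:

```python
import torch
import torch.nn as nn

# Minimal sketch of the triplet idea: pain-expression images are embedded so
# that same-level pairs sit closer than different-level pairs.
embed = nn.Sequential(nn.Flatten(),
                      nn.Linear(64 * 64, 128), nn.ReLU(),
                      nn.Linear(128, 32))
triplet = nn.TripletMarginLoss(margin=1.0)

anchor   = embed(torch.randn(8, 1, 64, 64))  # e.g., "moderate pain" faces
positive = embed(torch.randn(8, 1, 64, 64))  # same pain level as the anchor
negative = embed(torch.randn(8, 1, 64, 64))  # a different pain level
loss = triplet(anchor, positive, negative)
loss.backward()          # one optimiser step would follow in real training
print(float(loss))
```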

Citations: 0
Validations of various in-hand object manipulation strategies employing a novel tactile sensor developed for an under-actuated robot hand.
IF 2.9 | Q2 ROBOTICS | Pub Date: 2024-09-26 | eCollection Date: 2024-01-01 | DOI: 10.3389/frobt.2024.1460589
Avinash Singh, Massimilano Pinto, Petros Kaltsas, Salvatore Pirozzi, Shifa Sulaiman, Fanny Ficuciello

Prisma Hand II is an under-actuated prosthetic hand developed at the University of Naples Federico II to study in-hand manipulation during grasping activities. Three motors mounted on the robotic hand drive 19 joints via elastic tendons. The hand's operation combines tactile sensing with under-actuation capabilities. Thanks to its dexterous motion capabilities, the hand has the potential to be employed in both industrial and prosthetic applications. However, no commercially available tactile sensors currently have dimensions compatible with the prosthetic hand. Hence, in this work, we develop a novel tactile sensor for the Prisma Hand II based on opto-electronic technology. The optimised dimensions of the proposed sensor make it possible to integrate it into the fingertips of the prosthetic hand. The output voltage obtained from the novel tactile sensor is used to determine optimum grasping forces and torques during in-hand manipulation tasks employing Neural Networks (NNs). The grasping force values obtained using a Convolutional Neural Network (CNN) and an Artificial Neural Network (ANN) are compared on Mean Square Error (MSE) to identify the better training network for the tasks. The tactile sensing capabilities of the proposed sensing method are presented and compared in simulation studies and experimental validations across various hand manipulation tasks. The developed tactile sensor is found to outperform the previous version of the sensor used in the hand.
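The CNN-versus-ANN comparison described above amounts to training two regressors from sensor voltages to grasp force and keeping the one with the lower MSE. The sketch below reproduces only that selection logic on synthetic data; the 8x8 voltage grid, both toy architectures, and the fabricated target are stand-ins, not the paper's setup:

```python
import torch
import torch.nn as nn

# Synthetic stand-ins: an 8x8 taxel voltage map per sample, and a fake
# force target derived from it (the real mapping comes from calibration).
voltages = torch.randn(256, 1, 8, 8)
forces = voltages.sum(dim=(1, 2, 3)).unsqueeze(1)

ann = nn.Sequential(nn.Flatten(), nn.Linear(64, 32), nn.ReLU(), nn.Linear(32, 1))
cnn = nn.Sequential(nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(),
                    nn.Flatten(), nn.Linear(8 * 64, 1))
mse = nn.MSELoss()

for name, model in [("ANN", ann), ("CNN", cnn)]:
    opt = torch.optim.Adam(model.parameters(), lr=1e-2)
    for _ in range(200):                       # tiny illustrative training loop
        opt.zero_grad()
        loss = mse(model(voltages), forces)
        loss.backward()
        opt.step()
    # Lower MSE wins the model-selection step described in the abstract.
    print(name, "MSE:", float(mse(model(voltages), forces)))
```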

Citations: 0
ExTraCT - Explainable trajectory corrections for language-based human-robot interaction using textual feature descriptions.
IF 2.9 | Q2 ROBOTICS | Pub Date: 2024-09-23 | eCollection Date: 2024-01-01 | DOI: 10.3389/frobt.2024.1345693
J-Anne Yow, Neha Priyadarshini Garg, Manoj Ramanathan, Wei Tech Ang

Introduction: In human-robot interaction (HRI), understanding human intent is crucial for robots to perform tasks that align with user preferences. Traditional methods that aim to modify robot trajectories based on language corrections often require extensive training to generalize across diverse objects, initial trajectories, and scenarios. This work presents ExTraCT, a modular framework designed to modify robot trajectories (and behaviour) using natural language input.

Methods: Unlike traditional end-to-end learning approaches, ExTraCT separates language understanding from trajectory modification, allowing robots to adapt language corrections to new tasks (including those with complex motions like scooping) as well as to various initial trajectories and object configurations, without additional end-to-end training. ExTraCT leverages Large Language Models (LLMs) to semantically match language corrections to predefined trajectory modification functions, allowing the robot to make the necessary adjustments to its path. This modular approach overcomes the limitations of pre-trained datasets and offers versatility across various applications.
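A toy stand-in for that matching step: the paper uses an LLM for the semantic match, but the mapping from an utterance to a predefined trajectory-modification function can be illustrated with any text-similarity scorer. The function names, textual feature descriptions, and Jaccard scoring below are our own examples, not ExTraCT's library:

```python
import numpy as np

# Hypothetical trajectory-modification library (our examples, not the paper's).
def move_higher(traj, delta=0.1):
    return traj + np.array([0.0, 0.0, delta])   # lift every waypoint in z

def move_slower(traj, factor=2):
    return np.repeat(traj, factor, axis=0)      # stretch the path in time

def keep_distance(traj, delta=0.1):
    return traj + np.array([delta, 0.0, 0.0])   # shift the path away in x

LIBRARY = {
    "go higher above the table": move_higher,
    "move slower along the path": move_slower,
    "keep more distance from the object": keep_distance,
}

def tokens(s):
    return set(s.lower().split())

def match(correction):
    # Jaccard word overlap as a crude proxy for LLM-based semantic matching.
    def score(desc):
        return len(tokens(desc) & tokens(correction)) / len(tokens(desc) | tokens(correction))
    best = max(LIBRARY, key=score)
    return best, LIBRARY[best]

trajectory = np.zeros((5, 3))                   # placeholder straight-line path
desc, fn = match("please go a bit higher over the table")
print(desc)               # -> "go higher above the table"
print(fn(trajectory)[0])  # -> [0.  0.  0.1]
```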

Results: Comprehensive user studies conducted in simulation and with a physical robot arm demonstrated that ExTraCT's trajectory corrections are more accurate and preferred by users in 80% of cases compared to the baseline.

Discussion: ExTraCT offers a more explainable approach to understanding language corrections, which could facilitate learning human preferences. We also demonstrated the adaptability and effectiveness of ExTraCT in complex scenarios such as assistive feeding, presenting it as a versatile solution across various HRI applications.

Citations: 0
Augmenting perceived stickiness of physical objects through tactile feedback after finger lift-off.
IF 2.9 | Q2 ROBOTICS | Pub Date: 2024-09-18 | eCollection Date: 2024-01-01 | DOI: 10.3389/frobt.2024.1415464
Tadatoshi Kurogi, Yuki Inoue, Takeshi Fujiwara, Kouta Minamizawa

Haptic Augmented Reality (HAR) is a method that actively modulates the perceived haptics of physical objects by presenting additional haptic feedback using a haptic display. However, most of the proposed HAR research focuses on modifying the hardness, softness, roughness, smoothness, friction, and surface shape of physical objects. In this paper, we propose an approach to augment the perceived stickiness of a physical object by presenting additional tactile feedback at a particular time after the finger lifts off from the physical object using a thin and soft tactile display suitable for HAR. To demonstrate this concept, we constructed a thin and soft tactile display using a Dielectric Elastomer Actuator suitable for HAR. We then conducted two experiments to validate the effectiveness of the proposed approach. In Experiment 1, we showed that the developed tactile display can augment the perceived stickiness of physical objects by presenting additional tactile feedback at appropriate times. In Experiment 2, we investigated the stickiness experience obtained by our proposed approach and showed that the realism of the stickiness experience and the harmony between the physical object and the additional tactile feedback are affected by the frequency and presentation timing of the tactile feedback. Our proposed approach is expected to contribute to the development of new applications not only in HAR, but also in Virtual Reality, Mixed Reality, and other domains using haptic displays.
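The presentation-timing idea reduces to an event-driven rule: once finger lift-off is detected, wait a set delay, then drive the actuator at a chosen frequency for a short burst. The delay, frequency, and duration below are placeholders, not the paper's tuned parameters, and `actuate` abstracts whatever driver the DEA hardware actually exposes:

```python
import time

def on_lift_off(actuate, delay_s=0.05, freq_hz=40.0, duration_s=0.2):
    """Fire one tactile burst after finger lift-off. `actuate(bool)` is a
    hypothetical driver callback; all timing values are placeholders."""
    time.sleep(delay_s)                  # post-lift-off presentation delay
    half_period = 0.5 / freq_hz
    t_end = time.monotonic() + duration_s
    while time.monotonic() < t_end:
        actuate(True)                    # charge the DEA (feedback "on")
        time.sleep(half_period)
        actuate(False)                   # discharge (feedback "off")
        time.sleep(half_period)

pulses = []
on_lift_off(lambda on: pulses.append(on))
print(len(pulses) // 2, "cycles delivered")  # roughly 8 cycles: 40 Hz x 0.2 s
```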

Citations: 0
Development of a bionic hexapod robot with adaptive gait and clearance for enhanced agricultural field scouting.
IF 2.9 | Q2 ROBOTICS | Pub Date: 2024-09-18 | eCollection Date: 2024-01-01 | DOI: 10.3389/frobt.2024.1426269
Zhenghua Zhang, Weilong He, Fan Wu, Lina Quesada, Lirong Xiang

High agility, maneuverability, and payload capacity, combined with a small footprint, make legged robots well suited for precision agriculture applications. In this study, we introduce a novel bionic hexapod robot designed for agricultural applications to address the limitations of traditional wheeled and aerial robots. The robot features a terrain-adaptive gait and adjustable clearance to ensure stability and robustness over various terrains and obstacles. Equipped with a high-precision Inertial Measurement Unit (IMU), the robot is able to monitor its attitude in real time to maintain balance. To enhance obstacle detection and self-navigation, we have also designed an advanced version of the robot equipped with an optional sensing system comprising LiDAR, stereo cameras, and distance sensors. We have tested the standard version of the robot under different ground conditions, including hard concrete floors, rugged grass, slopes, and uneven fields with obstacles. The robot maintains good stability, with pitch angle fluctuations ranging from -11.5° to 8.6° in all conditions, and can walk on slopes with gradients up to 17°. These trials demonstrated the robot's adaptability to complex field environments and validated its ability to maintain stability and efficiency. In addition, the terrain-adaptive algorithm is more energy efficient than traditional obstacle avoidance algorithms, reducing energy consumption by 14.4% for each obstacle crossed. Combined with its flexible and lightweight design, our robot shows significant potential to improve agricultural practices by increasing efficiency, lowering labor costs, and enhancing sustainability. In future work, we will further improve the robot's energy efficiency, its durability in various environmental conditions, and its compatibility with different crops and farming methods.
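The balance-monitoring behaviour described above can be condensed into a simple rule: stay in the nominal gait while IMU pitch sits inside the observed band, and trade speed for stability (or stop) as tilt grows. The thresholds echo the reported -11.5° to 8.6° fluctuation band and 17° slope limit, but the rule itself, the gait names, and the clearance step are our guesses, not the paper's controller:

```python
# Gait/clearance selection from IMU pitch. Thresholds mirror the figures
# reported above; the gait mapping and 2 cm clearance step are assumptions.
def adapt(pitch_deg: float, clearance_m: float) -> tuple[str, float]:
    if -11.5 <= pitch_deg <= 8.6:
        return "tripod", clearance_m            # nominal gait on level ground
    if abs(pitch_deg) <= 17.0:
        return "wave", clearance_m + 0.02       # slower, statically stabler gait
    return "halt", clearance_m                  # beyond the rated slope

for pitch in (2.0, -13.0, 20.0):
    print(pitch, "->", adapt(pitch, 0.10))
```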

Citations: 0