Pub Date: 2025-10-30 | eCollection Date: 2025-01-01 | DOI: 10.3389/frobt.2025.1589025
Davide Picchi, Sigrid Brell-Çokcan
Mini cranes play a pivotal role in construction due to their versatility across numerous scenarios. Recent advancements in Reinforcement Learning (RL) have enabled agents to operate cranes in virtual environments for predetermined tasks, paving the way for future real-world deployment. Traditionally, most RL agents use a squashed Gaussian distribution to select actions. In this study, we investigate a mini-crane scenario that could potentially be fully automated by AI and explore replacing the Gaussian distribution with the Kumaraswamy distribution, a close relative of the Beta distribution, for stochastic action selection. Our results indicate that the Kumaraswamy distribution offers computational advantages while maintaining robust performance, making it an attractive alternative for RL in continuous control applications.
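The computational appeal of the Kumaraswamy distribution comes from its closed-form CDF, F(x) = 1 − (1 − x^a)^b on (0, 1), which makes inverse-transform sampling and log-probabilities cheap compared to the Beta distribution. A minimal sketch, not the authors' implementation (the policy-gradient usage is an assumption):

```python
import math
import random

def kumaraswamy_icdf(u, a, b):
    """Closed-form inverse CDF: F(x) = 1 - (1 - x**a)**b on (0, 1)."""
    return (1.0 - (1.0 - u) ** (1.0 / b)) ** (1.0 / a)

def kumaraswamy_sample(a, b, rng=random.random):
    # Inverse-transform sampling needs a single uniform draw and no
    # rejection loop, unlike typical Beta samplers.
    return kumaraswamy_icdf(rng(), a, b)

def kumaraswamy_log_prob(x, a, b):
    # log f(x) = log a + log b + (a-1) log x + (b-1) log(1 - x**a),
    # the quantity a policy-gradient update would need for a bounded
    # action in (0, 1).
    return math.log(a) + math.log(b) + (a - 1.0) * math.log(x) \
        + (b - 1.0) * math.log(1.0 - x ** a)
```

For a = b = 1 the distribution reduces to the uniform on (0, 1), which gives a quick sanity check on both functions.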
Exploiting the Kumaraswamy distribution in a reinforcement learning context. Frontiers in Robotics and AI, 12:1589025. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12611641/pdf/
Pub Date: 2025-10-30 | eCollection Date: 2025-01-01 | DOI: 10.3389/frobt.2025.1660691
Junya Yamamoto, Kenji Tahara, Takahiro Wada
In response to the growing need for flexibility in handling complex tasks, research on human-robot collaboration (HRC) has garnered considerable attention. Recent studies on HRC have achieved smooth handover tasks between humans and robots by adaptively responding to human states. Collaboration was further improved by conveying the state of the robot to humans via robotic interactive motion cues. However, in scenarios such as collaborative assembly tasks that require precise positioning, methods relying on motion or forces caused by interactions through the shared object compromise both task accuracy and smoothness, and are therefore not directly applicable. To address this, the present study proposes a method to convey the stiffness of the robot to a human arm during collaborative human-robot assembly tasks in a manner that does not affect the shared object or task, aiming to enhance efficiency and reduce human workload. Sixteen participants performed a collaborative assembly task with a robot, which involved unscrewing, repositioning, and reattaching a part while the robot held and adjusted the position of the part. The experiment examined the effectiveness of the proposed method, in which the robot's stiffness was communicated to a participant's forearm. The independent variable, tested within-subjects, was the stiffness presentation method, with three levels: without the proposed method (no presentation) and with the proposed method (real-time and predictive presentations). The results demonstrated that the proposed method enhanced task efficiency by shortening task completion time, which was associated with lower subjective workload scores.
Effect of presenting robot hand stiffness to human arm on human-robot collaborative assembly tasks. Frontiers in Robotics and AI, 12:1660691. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12611644/pdf/
Pub Date: 2025-10-30 | eCollection Date: 2025-01-01 | DOI: 10.3389/frobt.2025.1594529
Silvia Filogna, Giovanni Arras, Tommaso Turchi, Giuseppe Prencipe, Elena Beani, Clara Bombonato, Francesca Fedeli, Gemma D'Alessandro, Antea Scrocco, Giuseppina Sgandurra
Despite the growing interest in Artificial Intelligence (AI) for pediatric rehabilitation, family engagement in the design of these technologies remains limited. Understanding how AI-driven tools align with family needs, caregiving routines, and ethical concerns is crucial for their successful adoption. In this study, we actively involved nine families of children with Cerebral Palsy (CP) in an online participatory design workshop, underscoring both the feasibility of and the need for integrating families' perspectives into AI development. Families participated enthusiastically, not only sharing insights but also appreciating the opportunity to help shape future technologies. Their active engagement challenges the assumption that co-design with families is complex or impractical, highlighting how structured yet flexible methodologies can make such initiatives highly effective. The online format further facilitated participation, allowing families to join the discussion and ensuring a diverse range of perspectives. The workshop's key findings reveal three core priorities for families: 1. AI should adapt to daily caregiving routines rather than impose rigid structures; 2. digital tools should enhance communication and collaboration between families and clinicians, rather than replace human interaction; and 3. AI-driven systems could empower children's autonomy while maintaining parental oversight. Additionally, families raised critical concerns about data privacy, transparency, and the need to preserve empathy in AI-mediated care. Our findings reinforce the urgent need to shift toward family-centered AI design, moving beyond purely technological solutions toward ethically responsible, inclusive innovations. This research not only demonstrates the possibility and success of engaging families in co-design processes but also provides a model for future AI development that genuinely reflects the lived experiences of children and caregivers.
Pathways to family-centered healthcare: co-designing AI solutions with families in pediatric rehabilitation. Frontiers in Robotics and AI, 12:1594529. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12611681/pdf/
Pub Date: 2025-10-30 | DOI: 10.3389/frobt.2025.1699371
Ker-Jiun Wang, Ramana Vinjamuri, Maryam Alimardani, Tharun Kumar Reddy, Zhi-Hong Mao
Editorial: NeuroDesign in human-robot interaction: the making of engaging HRI technology your brain can't resist. Frontiers in Robotics and AI, 12:1699371. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12611685/pdf/
Pub Date: 2025-10-29 | eCollection Date: 2025-01-01 | DOI: 10.3389/frobt.2025.1682031
Michaela Kümpel, Manuel Scheibl, Jan-Philipp Töberg, Vanessa Hassouna, Philipp Cimiano, Britta Wrede, Michael Beetz
This paper addresses the challenge of enabling robots to autonomously prepare meals by bridging natural language recipe instructions and robotic action execution. We propose a novel methodology leveraging Actionable Knowledge Graphs (AKGs) to map recipe instructions onto six core categories of robotic manipulation tasks, termed Action Cores (ACs): cutting, pouring, mixing, preparing, pick and place, and cook and cool. Each AC is subdivided into Action Groups (AGs), which represent a specific motion parameterization required for task execution. Using the Recipe1M+ dataset (Marín et al., IEEE Transactions on Pattern Analysis and Machine Intelligence, 2021, 43, 187-203), encompassing over one million recipes, we systematically analysed action verbs and matched them to ACs using direct matching and cosine similarity, achieving a coverage of 76.5%. For the unmatched verbs, we employ a neuro-symbolic approach, matching verbs to existing AGs or generating new Action Cores using a Large Language Model. Our findings highlight the versatility of AKGs in adapting general plans to specific robotic tasks, validated through an experimental application in a meal preparation scenario. This work sets a foundation for adaptive robotic systems capable of performing a wide array of complex culinary tasks with minimal human intervention.
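The direct-match-then-cosine-similarity step can be sketched as follows; the toy 3-dimensional embeddings, the 0.8 threshold, and the three-core subset are illustrative assumptions, not the paper's actual word vectors or parameters:

```python
import math

# Invented toy embeddings; the paper matches Recipe1M+ verbs against
# learned word vectors (assumption for illustration only).
EMB = {
    "cut":     (0.9, 0.1, 0.0),
    "slice":   (0.8, 0.2, 0.1),
    "pour":    (0.0, 0.9, 0.1),
    "drizzle": (0.1, 0.8, 0.2),
    "mix":     (0.1, 0.1, 0.9),
}

# Canonical verb per Action Core (a subset of the paper's six cores).
ACTION_CORES = {"cutting": "cut", "pouring": "pour", "mixing": "mix"}

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    return dot / (math.sqrt(sum(a * a for a in u)) *
                  math.sqrt(sum(b * b for b in v)))

def match_verb(verb, threshold=0.8):
    # Stage 1: direct matching against each core's canonical verb.
    for core, canon in ACTION_CORES.items():
        if verb == canon:
            return core
    # Stage 2: cosine similarity fallback.
    core, score = max(((c, cosine(EMB[verb], EMB[cv]))
                       for c, cv in ACTION_CORES.items()),
                      key=lambda p: p[1])
    # None here corresponds to the paper's unmatched verbs, which are
    # handed to the LLM-based neuro-symbolic step.
    return core if score >= threshold else None
```

Here "slice" has no direct match but lands in the cutting core via similarity, mirroring how the reported 76.5% coverage combines both stages.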
Everything robots need to know about cooking actions: creating actionable knowledge graphs to support robotic meal preparation. Frontiers in Robotics and AI, 12:1682031. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12605030/pdf/
Pub Date: 2025-10-28 | eCollection Date: 2025-01-01 | DOI: 10.3389/frobt.2025.1684845
Rui Wang, Ruiqi Wang, Hao Hu, Huai Yu
Introduction: Animal-involved scenarios pose significant challenges for autonomous driving systems due to their rarity, unpredictability, and safety-critical nature. Despite their importance, existing vision-language datasets for autonomous driving largely overlook these long-tail situations.
Methods: To address this gap, we introduce AniDriveQA, a novel visual question answering (VQA) dataset specifically designed to evaluate vision-language models (VLMs) in driving scenarios involving animals. The dataset is constructed through a scalable pipeline that collects diverse animal-related traffic scenes from internet videos, filters and annotates them using object detection and scene classification models, and generates multi-task VQA labels with a large vision-language model. AniDriveQA includes three key task types: scene description, animal description, and driving suggestion.
Results: For evaluation, a hybrid scheme was employed that combined classification accuracy for structured tasks with LLM-based scoring for open-ended responses. Extensive experiments on various open-source VLMs revealed large performance disparities across models and task types.
Discussion: The experimental results demonstrate that AniDriveQA effectively exposes the limitations of current VLMs in rare yet safety-critical autonomous driving scenarios. The dataset provides a valuable diagnostic benchmark for advancing reasoning, perception, and decision-making capabilities in future vision-language models.
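The hybrid evaluation scheme reduces to a small computation: exact-match accuracy over the structured tasks plus an aggregated judge score for open-ended answers. The [0, 1] judge scale and simple averaging below are assumptions for illustration, not the paper's exact protocol:

```python
def hybrid_score(structured, open_ended):
    """structured: (predicted_label, gold_label) pairs for the scene and
    animal description tasks; open_ended: LLM-judge scores for driving
    suggestions, assumed here to lie in [0, 1]."""
    accuracy = sum(p == g for p, g in structured) / len(structured)
    judge = sum(open_ended) / len(open_ended)
    return {"accuracy": accuracy, "llm_score": judge}
```

Reporting the two components separately, as sketched here, is what lets the benchmark expose per-task disparities across models.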
AniDriveQA: a VQA dataset for driving scenes with animal presence. Frontiers in Robotics and AI, 12:1684845. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12604350/pdf/
Pub Date: 2025-10-28 | eCollection Date: 2025-01-01 | DOI: 10.3389/frobt.2025.1695262
Georgiy N Kuplinov
Limited battery capacity poses a challenge for autonomous robots. We believe that instead of relying solely on electric motors and batteries, as Conventional Autonomous Robots (CAR) do, one way to address this challenge may be to develop Biohybrid Autonomous Robots (BAR), building on current achievements in the field of biohybrid robotics. The BAR approach rests on the facts that fat stores a high amount of energy, that biological muscles generate decent force per unit of cross-sectional area, and that biological muscles, unlike electric motors, are capable of regeneration and adaptation. To reach conclusions about the feasibility of BAR, this study draws on data from muscle energetics, robotics, engineering, physiology, biomechanics, and other fields to perform interdisciplinary calculations. Our calculations show that, in an ideal scenario, the BAR approach is up to 5.1 times more efficient than Conventional Autonomous Robots (CAR) with mass-produced batteries, in terms of the mass of energy substrate per unit of useful energy transported. The study also presents a model for determining the point of rational use of the BAR, taking into account the basal metabolism of living systems. The results provide a preliminary basis for further research on the BAR, putting it into the context of other possible solutions to the energy autonomy problem: Generator-Powered Autonomous Robots (GPAR) and Fuel-Cell Autonomous Robots (FCAR).
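The core of the argument is a specific-energy comparison. The figures below (fat ≈ 37 MJ/kg, muscle efficiency ≈ 25%, Li-ion ≈ 250 Wh/kg, drivetrain efficiency ≈ 90%) are generic textbook assumptions chosen for illustration; they do not reproduce the paper's 5.1× result, which rests on its own scenario and accounting:

```python
FAT_MJ_PER_KG = 37.0          # assumed metabolizable energy density of fat
MUSCLE_EFFICIENCY = 0.25      # assumed chemical-to-mechanical efficiency
BATTERY_WH_PER_KG = 250.0     # assumed mass-produced Li-ion pack
DRIVETRAIN_EFFICIENCY = 0.90  # assumed battery-to-shaft efficiency

def useful_energy_ratio():
    # Useful mechanical energy per kilogram of carried energy substrate,
    # biohybrid (fat + muscle) versus conventional (battery + motor).
    bar = FAT_MJ_PER_KG * 1e6 * MUSCLE_EFFICIENCY             # J/kg
    car = BATTERY_WH_PER_KG * 3600.0 * DRIVETRAIN_EFFICIENCY  # J/kg
    return bar / car
```

With these generic numbers the ratio comes out above 1 by an order of magnitude, which shows why the comparison is worth making even before the paper's more careful accounting (including basal metabolism) pulls the advantage down.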
The biohybrid autonomous robots (BAR): a feasibility of implementation. Frontiers in Robotics and AI, 12:1695262. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12603390/pdf/
Pub Date: 2025-10-28 | eCollection Date: 2025-01-01 | DOI: 10.3389/frobt.2025.1604506
S K Surya Prakash, Darshankumar Prajapati, Bhuvan Narula, Amit Shukla
This paper presents a robust vision-based motion planning framework for dual-arm manipulators that introduces a novel three-way force equilibrium with velocity-dependent stabilization. The framework combines an improved Artificial Potential Field (iAPF) for linear velocity control with a Proportional-Derivative (PD) controller for angular velocity, creating a hybrid twist command for precise manipulation. A priority-based state machine enables human-like asymmetric dual-arm manipulation. Lyapunov stability analysis proves the asymptotic convergence to desired configurations. The method introduces a computationally efficient continuous distance calculation between links based on line segment configurations, enabling real-time collision monitoring. Experimental validation integrates a real-time vision system using YOLOv8 OBB that achieves 20 frames per second with 0.99/0.97 detection accuracy for bolts/nuts. Comparative tests against traditional APF methods demonstrate that the proposed approach provides stabilized motion planning with smoother trajectories and optimized spatial separation, effectively preventing inter-arm collisions during industrial component sorting.
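The real-time collision monitoring rests on a continuous minimum-distance test between link line segments. A standard clamped closest-point computation (the classic routine from computational geometry, cf. Ericson's Real-Time Collision Detection; not necessarily the authors' exact formulation) looks like this:

```python
import math

def dot(u, v):
    return u[0] * v[0] + u[1] * v[1] + u[2] * v[2]

def sub(u, v):
    return (u[0] - v[0], u[1] - v[1], u[2] - v[2])

def clamp(x, lo=0.0, hi=1.0):
    return max(lo, min(hi, x))

def closest_segment_distance(p1, q1, p2, q2, eps=1e-12):
    """Minimum distance between 3D segments [p1, q1] and [p2, q2]."""
    d1, d2, r = sub(q1, p1), sub(q2, p2), sub(p1, p2)
    a, e, f = dot(d1, d1), dot(d2, d2), dot(d2, r)
    if a <= eps and e <= eps:              # both segments degenerate to points
        return math.sqrt(dot(r, r))
    if a <= eps:                           # first segment is a point
        s, t = 0.0, clamp(f / e)
    else:
        c = dot(d1, r)
        if e <= eps:                       # second segment is a point
            s, t = clamp(-c / a), 0.0
        else:
            b = dot(d1, d2)
            denom = a * e - b * b          # zero when segments are parallel
            s = clamp((b * f - c * e) / denom) if denom > eps else 0.0
            t = (b * s + f) / e
            if t < 0.0:                    # re-clamp s after clamping t
                s, t = clamp(-c / a), 0.0
            elif t > 1.0:
                s, t = clamp((b - c) / a), 1.0
    c1 = (p1[0] + s * d1[0], p1[1] + s * d1[1], p1[2] + s * d1[2])
    c2 = (p2[0] + t * d2[0], p2[1] + t * d2[1], p2[2] + t * d2[2])
    d = sub(c1, c2)
    return math.sqrt(dot(d, d))
```

Running this test for each pair of links every control cycle is cheap (a handful of dot products per pair), which is what makes inter-arm distance usable as a repulsive term in a potential-field controller.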
iAPF: an improved artificial potential field framework for asymmetric dual-arm manipulation with real-time inter-arm collision avoidance. Frontiers in Robotics and AI, 12:1604506. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12602476/pdf/
Pub Date: 2025-10-27 | eCollection Date: 2025-01-01 | DOI: 10.3389/frobt.2025.1659302
Stephanie Tulk Jesso, William George Kennedy, Nele Russwinkel, Levern Currie
Editorial: The translation and implementation of robotics and embodied AI in healthcare. Frontiers in Robotics and AI, 12:1659302. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12598029/pdf/
Pub Date: 2025-10-23 | eCollection Date: 2025-01-01 | DOI: 10.3389/frobt.2025.1687825
Federico Allione, Maria Lazzaroni, Antonios E Gkikakis, Christian Di Natali, Luigi Monica, Darwin G Caldwell, Jesús Ortiz
Musculoskeletal disorders, particularly low back pain, are some of the most common occupational health issues globally, causing significant personal suffering and economic burdens. Workers performing repetitive manual material handling tasks are especially at risk. FleXo, a lightweight (1.35 kg), flexible, ergonomic, passive back-support exoskeleton, is intended to reduce lower back strain during lifting tasks while allowing full freedom of movement for activities like walking, sitting, or side bending. FleXo's design results from an advanced multi-objective design optimization approach that balances functionality and user comfort. In this work, validated through user feedback in a series of relevant repetitive tasks, it is demonstrated that FleXo can reduce the perceived physical effort during lifting tasks, enhance user satisfaction, improve employee wellbeing, promote workplace safety, decrease injuries, and lower the costs (both to society and to companies) associated with lower back pain and injury.
FleXo: a flexible passive exoskeleton optimized for reducing lower back strain in manual handling tasks. Frontiers in Robotics and AI, 12:1687825. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12588867/pdf/