Pub Date: 2025-11-12 · eCollection Date: 2025-01-01 · DOI: 10.3389/frobt.2025.1629884
Dimitris Voskakis, Martin Føre, Eirik Svendsen, Aleksander Perlic Liland, Sonia Rey Planellas, Harkaitz Eguiraun, Pascal Klebert
In recent years, several studies analyzing and interpreting fish behavioral patterns in aquaculture settings have been published. Understanding how fish react and respond to various scenarios and treatments can provide insight into how to achieve sustainable and efficient aquaculture production. Many of these research efforts have been conducted in land-based tanks, as these allow closer and more continuous monitoring of the fish than is possible at commercial facilities, improving data quality and hence the insights that can be gained. However, most experimental tanks are closed-loop environments that differ markedly from commercial production units; as a consequence, results obtained in these systems are not directly transferable to industrial setups. Moreover, tank monitoring in such trials is often done using a single observation mode or a limited selection of modes, which may not be sufficient to capture the full dynamics of fish responses. The present study seeks to address these challenges by developing the Cyber-Enhanced tank environment for aquaculture research. The concept features a tank environment set up to simulate the prevailing conditions in aquaculture units: mimicking natural light conditions, hiding sensors and other systems to reduce impacts on the fish and potential collisions, and using a tank color known to promote positive welfare in farmed fish. The tank was equipped with a novel sensor suite for high-fidelity detection and monitoring of fish behaviors based on a combination of an event camera, a scanning imaging sonar, and conventional cameras. This concept represents a step towards experimental setups that are more realistic, in that conditions resemble those in commercial facilities, and that use a multi-modal sensor approach to capture details of fish behavioral responses. The setup will be used as a basis for future fish-response monitoring experiments in intensive aquaculture tanks.
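As a minimal illustration of one practical step in such a multi-modal suite, the sketch below pairs samples from three asynchronous streams by nearest timestamp. The stream rates, names, and tolerance are assumptions; the paper does not describe its synchronization code.

```python
# Hypothetical sketch: aligning observations from an event camera, a scanning
# imaging sonar, and a conventional camera by timestamp. The nearest-neighbour
# pairing rule and tolerance are assumptions, not the authors' pipeline.
from bisect import bisect_left

def nearest(timestamps, t):
    """Return the timestamp in the sorted list closest to t."""
    i = bisect_left(timestamps, t)
    candidates = timestamps[max(i - 1, 0):i + 1]
    return min(candidates, key=lambda c: abs(c - t))

def align_streams(event_ts, sonar_ts, rgb_ts, tolerance=0.05):
    """Pair each sonar ping (the slowest stream) with the closest event-camera
    slice and RGB frame; drop pings with no match within the tolerance [s]."""
    aligned = []
    for t in sonar_ts:
        te, tr = nearest(event_ts, t), nearest(rgb_ts, t)
        if abs(te - t) <= tolerance and abs(tr - t) <= tolerance:
            aligned.append((t, te, tr))
    return aligned

# Toy usage: 3 sonar pings, 1 kHz event slices, 25 fps RGB frames.
events = [i / 1000 for i in range(1200)]
sonar = [0.10, 0.60, 1.10]
rgb = [i / 25 for i in range(30)]
print(align_streams(events, sonar, rgb))
```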
{"title":"The cyber-enhanced tank: a novel concept for increased realism and multi-modal monitoring in tank based finfish aquaculture research.","authors":"Dimitris Voskakis, Martin Føre, Eirik Svendsen, Aleksander Perlic Liland, Sonia Rey Planellas, Harkaitz Eguiraun, Pascal Klebert","doi":"10.3389/frobt.2025.1629884","DOIUrl":"https://doi.org/10.3389/frobt.2025.1629884","url":null,"abstract":"<p><p>In recent years, several studies that analyze and interpret fish behavioral patterns in aquaculture settings have been published. Understanding how the fish react and respond to various scenarios and treatments can help provide insight and knowledge on how to achieve sustainable and efficient aquaculture production. Many of these research efforts have been conducted in land based tanks as this allows for closer and more continuous monitoring of the fish than what is possible at commercial facilities, essentially improving data quality and hence the possible insights to gain from these. However, most experimental tanks are closed-loop environments that are not particularly similar to commercial production units, as a consequence the results obtained in these systems are not directly transferable to industrial setups. Moreover, tank monitoring in such trials is often done using a single or a limited selection of different observation modes, which may not be sufficient to capture the full dynamics of fish responses. The present study seeks to address these challenges by developing the Cyber-Enhanced tank environment for aquaculture research. This concept features a tank environment setup to simulate the prevailing conditions in aquaculture units, mimicking natural light conditions, hiding sensors and other systems to reduce impacts on the fish and potential collisions, and using a tank color known to stimulate positive welfare in farmed fish. The tank was equipped with a novel sensor suite for high-fidelity detection and monitoring of fish behaviors based on a combination of an event camera, a scanning imaging sonar and conventional cameras. This innovative concept represents a step towards conducting experimental setups that are both more realistic in that conditions resemble those in commercial facilities and that uses a multi-modal sensor approach to capture details in fish responses in behaviors. The setup will be used as a basis for future fish responses experiments monitoring experiments in intensive aquaculture tanks.</p>","PeriodicalId":47597,"journal":{"name":"Frontiers in Robotics and AI","volume":"12 ","pages":"1629884"},"PeriodicalIF":3.0,"publicationDate":"2025-11-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12646893/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145641148","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2025-11-07 · eCollection Date: 2025-01-01 · DOI: 10.3389/frobt.2025.1628795
Anouk Neerincx, Julian Plat, Maartje M A De Graaf
Introduction: Socially assistive robots (SARs) have shown promise in pediatric healthcare by helping children manage the stress and anxiety associated with medical procedures. However, limited research exists on the specific robot behaviors that are most effective in reducing negative emotions in children during stressful interventions. This study aimed to compare the effectiveness of two emotional support strategies provided by a SAR during a vaccination event: internal emotion regulation through a guided breathing exercise and external emotion regulation via motivational speech and physical comfort (hugging). Additionally, we compared the effects of active and passive participation in the two SAR interventions.
Methods: A field study was conducted during annual group vaccination days, involving 225 children aged 8-12 years. Emotional and behavioral outcomes, including anxiety, fear, trust, and willingness to engage with the robot, were measured using self-report questionnaires.
Results: Results indicated that while girls reported higher levels of fear and anxiety than boys, active participation in the SAR intervention led to greater reductions in fear and anxiety, particularly among girls. Additionally, active hugging enhanced both engagement and trust, with girls showing a stronger response to this physical comfort intervention.
Discussion: These findings indicate that, within the constraints of this study, SAR interventions were associated with reduced negative emotions in children during vaccinations, with active participation and physical comfort being particularly impactful for emotional support. This study offers valuable insights into optimizing SAR interventions in pediatric healthcare.
{"title":"Socially assistive robots in child healthcare: evaluating internal and external emotion regulation interventions.","authors":"Anouk Neerincx, Julian Plat, Maartje M A De Graaf","doi":"10.3389/frobt.2025.1628795","DOIUrl":"10.3389/frobt.2025.1628795","url":null,"abstract":"<p><strong>Introduction: </strong>Socially assistive robots (SARs) have shown promise in pediatric healthcare by helping children manage the stress and anxiety associated with medical procedures. However, limited research exists on the specific robot behaviors that are most effective in reducing negative emotions in children during stressful interventions. This study aimed to compare the effectiveness of two emotional support strategies provided by a SAR during a vaccination event: internal emotion regulation through a guided breathing exercise and external emotion regulation via motivational speech and physical comfort (hugging). Additionally, we compared the effects of active and passive participation in the two SAR interventions.</p><p><strong>Methods: </strong>A field study was conducted during annual group vaccination days, involving 225 children aged 8-12 years. Emotional and behavioral outcomes, including anxiety, fear, trust, and willingness to engage with the robot, were measured using self-report questionnaires.</p><p><strong>Results: </strong>Results indicated that while girls reported higher levels of fear and anxiety than boys, active participation in the SAR intervention led to greater reductions in fear and anxiety, particularly among girls. Additionally, active hugging enhanced both engagement and trust, with girls showing a stronger response to such a physical comfort intervention.</p><p><strong>Discussion: </strong>These findings indicate that, within the constraints of this study, SAR interventions were associated with reduced negative emotions in children during vaccinations, with active participation and physical comfort being particularly impactful for emotional support. This study offers valuable insights into optimizing SAR interventions in pediatric healthcare.</p>","PeriodicalId":47597,"journal":{"name":"Frontiers in Robotics and AI","volume":"12 ","pages":"1628795"},"PeriodicalIF":3.0,"publicationDate":"2025-11-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12635533/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145589574","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2025-11-06 · eCollection Date: 2025-01-01 · DOI: 10.3389/frobt.2025.1668910
Yunwei Zhang, Jing Tian, Qiaochu Xiong
Embodied intelligent systems build upon the foundations of behavioral robotics and classical cognitive architectures. They integrate multimodal perception, world modeling, and adaptive control to support closed-loop interaction in dynamic and uncertain environments. Recent breakthroughs in Multimodal Large Models (MLMs) and World Models (WMs) are profoundly transforming this field, providing the tools to achieve its long-envisioned capabilities of semantic understanding and robust generalization. Targeting the central challenge of how modern MLMs and WMs jointly advance embodied intelligence, this review provides a comprehensive overview across key dimensions, including multimodal perception, cross-modal alignment, adaptive decision-making, and Sim-to-Real transfer. Furthermore, we systematize these components into a three-stage theoretical framework termed "Dynamic Perception-Task Adaptation (DP-TA)". This framework integrates multimodal perception modeling, causally driven world state prediction, and semantically guided strategy optimization, establishing a comprehensive "perception-modeling-decision" loop. To support this, we introduce a "Feature-Conditioned Modal Alignment (F-CMA)" mechanism to enhance cross-modal fusion under task constraints.
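Purely for illustration, the skeleton below shows the shape of the three-stage "perception-modeling-decision" loop that DP-TA describes; every name and placeholder in it is an assumption, since the review presents the framework conceptually rather than as code.

```python
# Minimal, illustrative skeleton of the DP-TA closed loop. All classes and
# functions here are stand-ins assumed for exposition, not the review's API.
class ToyEnv:
    """Stand-in environment producing multimodal observations."""
    def __init__(self):
        self.t = 0
    def observe(self):
        return {"rgb": f"frame-{self.t}", "depth": f"depth-{self.t}"}
    def step(self, action):
        self.t += 1

def perceive(obs, instruction):
    """Stage 1: multimodal perception modeling (placeholder fusion)."""
    return {**obs, "instruction": instruction}

def model_world(features, history):
    """Stage 2: causally driven world-state prediction (placeholder)."""
    return {"now": features, "predicted_next": len(history) + 1}

def decide(state, task):
    """Stage 3: semantically guided strategy optimization (placeholder)."""
    return {"action": "move_toward", "goal": task}

def dp_ta_loop(env, task, steps=3):
    history = []
    for _ in range(steps):
        feats = perceive(env.observe(), task)   # perception
        state = model_world(feats, history)     # modeling
        env.step(decide(state, task))           # decision / action
        history.append(state)
    return history

print(dp_ta_loop(ToyEnv(), "fetch the red cup"))
```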
{"title":"A review of embodied intelligence systems: a three-layer framework integrating multimodal perception, world modeling, and structured strategies.","authors":"Yunwei Zhang, Jing Tian, Qiaochu Xiong","doi":"10.3389/frobt.2025.1668910","DOIUrl":"10.3389/frobt.2025.1668910","url":null,"abstract":"<p><p>Embodied intelligent systems build upon the foundations of behavioral robotics and classical cognitive architectures. They integrate multimodal perception, world modeling, and adaptive control to support closed-loop interaction in dynamic and uncertain environments. Recent breakthroughs in Multimodal Large Models (MLMs) and World Models (WMs) are profoundly transforming this field, providing the tools to achieve its long-envisioned capabilities of semantic understanding and robust generalization. Targeting the central challenge of how modern MLMs and WMs jointly advance embodied intelligence, this review provides a comprehensive overview across key dimensions, including multimodal perception, cross-modal alignment, adaptive decision-making, and Sim-to-Real transfer. Furthermore, we systematize these components into a three-stage theoretical framework termed \"Dynamic Perception-Task Adaptation (DP-TA)\". This framework integrates multimodal perception modeling, causally driven world state prediction, and semantically guided strategy optimization, establishing a comprehensive \"perception-modeling-decision\" loop. To support this, we introduce a \"Feature-Conditioned Modal Alignment (F-CMA)\" mechanism to enhance cross-modal fusion under task constraints.</p>","PeriodicalId":47597,"journal":{"name":"Frontiers in Robotics and AI","volume":"12 ","pages":"1668910"},"PeriodicalIF":3.0,"publicationDate":"2025-11-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12631203/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145589502","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2025-11-06 · eCollection Date: 2025-01-01 · DOI: 10.3389/frobt.2025.1639445
Mariya Anto, Koena Mukherjee, Koel Mukherjee
In the macro world, robots are programmed machines engineered to perform repetitive and often specialized tasks; scaling a robot down to the nanometer scale (a billionth of a meter) yields a nanorobot. The primary driving force behind the advent of micro- and nanorobots has always been medical technology, and such robots are popularly known as nano bio-robots. The current review examines these bio-robots in all their facets, from top-down and bottom-up fabrication approaches to their associated challenges and ethical approvals. The study describes in detail the synthesis techniques of nano bio-robots along with the actuation mechanisms they require. Further, the paper discusses how a nano-biorobotic drug-delivery system (NDDS) can deliver drugs in a controlled way to a targeted site in the host, in contrast to conventional drug administration. The paper also reviews and summarizes the administration pathways of these bio-robots in the human body and their efficacy in treating various disorders. Overall, the integration of nanorobots with biological concepts presents distinct advantages and holds significant promise for many applications.
{"title":"Nano bio-robots: a new frontier in targeted therapeutic delivery.","authors":"Mariya Anto, Koena Mukherjee, Koel Mukherjee","doi":"10.3389/frobt.2025.1639445","DOIUrl":"10.3389/frobt.2025.1639445","url":null,"abstract":"<p><p>In macro world, the robots are programmed machines, engineered to perform repetitive and often specialized tasks. Scaling down the size of a robot by a billionth of a meter gives a nano robot. The primary driving force behind the advent of micro and nano robots have always been the domain of medical technology and these robots are popularly known as nano bio-robots. The current review shines a light on these bio-robots in all their facets, encompassing both top-down and bottom-up fabrication approaches to their associated challenges followed by ethical approvals. The study describes in detail the synthesis techniques of nano bio-robot along with required actuation mechanism of the bio-robots. Further, in this paper, how a nano biorobotic drug-delivery system (NDDS) can deliver the drugs in a controlled way to the targeted site of the host in contrast to conventional drug administration is discussed. The paper also reviews and summarizes the administration pathways of these bio-robots in the human body and their efficacy in reducing various disorders. Overall, it can be said that the integration of nano robots with bio-concept presents distinct advantages and possesses significant promises for many applications.</p>","PeriodicalId":47597,"journal":{"name":"Frontiers in Robotics and AI","volume":"12 ","pages":"1639445"},"PeriodicalIF":3.0,"publicationDate":"2025-11-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12631375/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145589551","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2025-11-05 · eCollection Date: 2025-01-01 · DOI: 10.3389/frobt.2025.1638667
Taisei Nishishita, Genya Ishigami
As part of the robotics technologies required for in-situ resource utilization (ISRU), cargo rovers for transporting resources need to be developed. However, such cargo rovers face unique technical challenges that differ from those of conventional exploration rovers, including the need to traverse rough terrain while their mass varies with the payloads they transport. Research addressing these challenges has been limited, and the relevant technologies are not yet fully established. To address them, this paper proposes a parametric model for estimating wheel slippage. The model is formulated as a function of four input parameters (slope angle, rover heading angle, payload mass, and wheel angular velocity) and is applicable to resource-transporting rovers with varying mass. The use of a parametric model also reduces computational load, which is advantageous for onboard implementation. The proposed estimation model was quantitatively evaluated against datasets obtained from multi-body dynamics analysis. This paper further introduces a new traversability assessment model that incorporates the proposed slip estimation model, and demonstrates it by integration into a sampling-based motion planner. The simulation results show that the planner with our model generates safer motions and enables the rover to reach the target regardless of the cargo payload.
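To make the idea concrete, here is a minimal sketch of such a parametric slip model: slip is regressed on the four inputs with a low-order polynomial basis and can then be queried cheaply inside a planner. The basis, the synthetic training data, and all coefficients below are illustrative assumptions; the authors fit their model to multi-body dynamics datasets.

```python
# Sketch of a parametric slip model in the spirit of the paper: slip as a
# cheap-to-evaluate function of slope, heading, payload mass, wheel speed.
import numpy as np

def features(slope, heading, mass, omega):
    """Low-order polynomial features of the four inputs (an assumed basis)."""
    x = np.stack([slope, heading, mass, omega], axis=-1)
    quad = np.concatenate([x, x**2, x[..., :1] * x[..., 2:3]], axis=-1)
    return np.concatenate([np.ones(x.shape[:-1] + (1,)), quad], axis=-1)

rng = np.random.default_rng(0)
n = 500
slope = rng.uniform(0, 20, n)       # deg
heading = rng.uniform(-90, 90, n)   # deg
mass = rng.uniform(0, 50, n)        # kg payload
omega = rng.uniform(0.5, 3.0, n)    # rad/s
# Synthetic "ground truth" slip standing in for multi-body dynamics data.
slip = (0.01 * slope + 0.002 * mass + 0.005 * slope * mass / 50
        + 0.02 * np.abs(np.sin(np.radians(heading))) + rng.normal(0, 0.01, n))

X = features(slope, heading, mass, omega)
theta, *_ = np.linalg.lstsq(X, slip, rcond=None)  # least-squares fit

def estimate_slip(slope, heading, mass, omega):
    return features(np.atleast_1d(slope), np.atleast_1d(heading),
                    np.atleast_1d(mass), np.atleast_1d(omega)) @ theta

# Cheap evaluation makes such a model usable inside a sampling-based planner,
# e.g., rejecting candidate motions whose predicted slip exceeds a threshold.
print(float(estimate_slip(10.0, 15.0, 25.0, 1.5)))
```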
{"title":"Slip estimation model for traversability-based motion planning of cargo rover on extraterrestrial surface.","authors":"Taisei Nishishita, Genya Ishigami","doi":"10.3389/frobt.2025.1638667","DOIUrl":"https://doi.org/10.3389/frobt.2025.1638667","url":null,"abstract":"<p><p>As part of the robotics technologies required for <i>In-situ</i> resource utilization (ISRU), the development of cargo rovers for transporting resources is needed. However, these cargo rovers have unique technical challenges that differ from conventional exploration rovers, including the need to traverse rough terrains with their varying mass due to transporting payloads. Moreover, research addressing these challenges has been limited, and the relevant technologies have not been fully established. To address these challenges, this paper proposes a parametric model for estimating wheel slippage. The model is formulated as a function of four input parameters: slope angle, rover heading angle, payload mass, and wheel angular velocity, and is applicable to resource-transporting rovers with varying mass. Additionally, the use of a parametric model reduces computational load, which offers advantages for onboard implementation. The proposed estimation model was quantitatively evaluated by comparing datasets obtained from multi-body dynamics analysis. This paper also introduces a new traversability assessment model which incorporates the proposed slip estimation model. We demonstrated the proposed model by integrating it into a sampling based motion planning. The simulation result of the motion planning show that the planner with our model can generate safer motions and enables the rover to reach the target regardless of the cargo payload.</p>","PeriodicalId":47597,"journal":{"name":"Frontiers in Robotics and AI","volume":"12 ","pages":"1638667"},"PeriodicalIF":3.0,"publicationDate":"2025-11-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12626833/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145565889","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2025-11-05 · eCollection Date: 2025-01-01 · DOI: 10.3389/frobt.2025.1633131
Yuchen Zhao, Yuxin Chen
Robotic surfaces consisting of many actuators can change shape to perform tasks such as object transportation and sorting. Increasing the number of actuators can enhance the robot's capacity, but controlling a large number of actuators is challenging, in part because the system-wide refresh time grows. We propose a novel control method whose refresh time is constant no matter how many actuators the robot contains. The method is distributed in nature: it first approximates target shapes, then broadcasts the approximation coefficients to the actuators, each of which computes its own input locally. To confirm the system-size-independent scaling, we build a robotic surface and measure the refresh time as a function of the number of actuators. We also perform shape-approximation experiments and achieve good agreement between the experiments and theoretical predictions. Our method is more efficient because it requires fewer control messages to coordinate robotic surfaces with the same accuracy. We further present a modeling strategy for the complex robot-object interaction force based on our control method and derive a feedback controller for object transportation tasks. This feedback controller is tested in object transportation experiments, and the results demonstrate the validity of the model and the controller.
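A minimal sketch of the broadcast principle, under assumed details: the controller fits the target surface with a truncated 2-D cosine basis and broadcasts only the K×K coefficients; each actuator then evaluates its own setpoint locally, so the message count does not grow with the number of actuators. The basis choice and grid size are illustrative, not the paper's exact scheme.

```python
# Function-approximation shape control sketch: fit coefficients centrally,
# broadcast them once, reconstruct heights locally at each actuator.
import numpy as np

N, K = 16, 4  # N x N actuator grid, K x K basis terms

def basis(i, j, k, l):
    """Cosine basis value at actuator (i, j) for mode (k, l)."""
    return np.cos(np.pi * k * (i + 0.5) / N) * np.cos(np.pi * l * (j + 0.5) / N)

def fit_coefficients(target):
    """Central controller: least-squares fit of K*K coefficients (one message)."""
    ii, jj = np.meshgrid(range(N), range(N), indexing="ij")
    A = np.stack([basis(ii, jj, k, l).ravel()
                  for k in range(K) for l in range(K)], axis=1)
    coeffs, *_ = np.linalg.lstsq(A, target.ravel(), rcond=None)
    return coeffs

def actuator_height(i, j, coeffs):
    """Run on each actuator: reconstruct its own setpoint from the broadcast."""
    modes = [(k, l) for k in range(K) for l in range(K)]
    return sum(c * basis(i, j, k, l) for c, (k, l) in zip(coeffs, modes))

# Toy target: a smooth bump. The message size is K*K = 16 numbers regardless
# of N, which is what makes the refresh time independent of actuator count.
ii, jj = np.meshgrid(range(N), range(N), indexing="ij")
target = np.exp(-((ii - 8) ** 2 + (jj - 8) ** 2) / 20.0)
c = fit_coefficients(target)
print(abs(actuator_height(8, 8, c) - target[8, 8]))  # approximation error
```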
{"title":"A shape control and object manipulation technique based on function approximation for robotic surfaces.","authors":"Yuchen Zhao, Yuxin Chen","doi":"10.3389/frobt.2025.1633131","DOIUrl":"10.3389/frobt.2025.1633131","url":null,"abstract":"<p><p>Robotic surfaces consisting of many actuators can change shape to perform tasks, such as object transportation and sorting. Increasing the number of actuators can enhance the robot's capacity, but controlling a large number of actuators is a challenging problem that includes issues such as the increased system-wide refresh time. We propose a novel control method that has constant refresh times, no matter how many actuators are in the robot. Having a distributed nature, the method first approximates target shapes, then broadcasts the approximation coefficients to the actuators and relies on itself to compute the inputs. To confirm the system size-independent scaling, we build a robot surface and measure the refresh time as a function of the number of actuators. We also perform experiments to approximate target shapes, and a good agreement between the experiments and theoretical predictions is achieved. Our method is more efficient because it requires fewer control messages to coordinate robot surfaces with the same accuracy. We also present a modeling strategy for the complex robot-object interaction force based on our control method and derive a feedback controller for object transportation tasks. This feedback controller is further tested by object transportation experiments, and the results demonstrate the validity of the model and the controller.</p>","PeriodicalId":47597,"journal":{"name":"Frontiers in Robotics and AI","volume":"12 ","pages":"1633131"},"PeriodicalIF":3.0,"publicationDate":"2025-11-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12626863/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145565835","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2025-11-04 · eCollection Date: 2025-01-01 · DOI: 10.3389/frobt.2025.1680285
Sufola Das Chagas Silva Araujo, Goh Kah Ong Michael, Uttam U Deshpande, Sudhindra Deshpande, Manjunath G Avalappa, Yash Amasi, Sumit Patil, Swathi Bhat, Sudarshan Karigoudar
Industrial robots currently deployed in small and medium-sized businesses (SMEs) are often too complex, expensive, or dependent on external computing resources. To bridge this gap, we introduce an autonomous logistics robot that combines adaptive control and visual perception on a small edge computing platform. An NVIDIA Jetson Nano runs a modified ResNet-18 model that concurrently executes three tasks: object-handling zone recognition, obstacle detection, and path tracking. A lightweight rack-and-pinion mechanism enables payload lifting of up to 2 kg without external assistance. Experimental evaluation in semi-structured warehouse settings demonstrated a path tracking accuracy of 92%, an obstacle avoidance success rate of 88%, and an object handling success rate of 90%, with a maximum perception-to-action latency of 150 ms. The system maintains stable operation for up to 3 hours on a single charge. Unlike approaches that focus on single functions or require cloud support, our design integrates navigation, perception, and mechanical handling into a low-power, standalone solution. This highlights its potential as a practical and cost-effective automation platform for SMEs.
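As an illustration of the multi-task design, the sketch below builds a shared ResNet-18 backbone with three lightweight heads in PyTorch. The head dimensions and class counts are assumptions for exposition; the paper's exact architecture is not reproduced here.

```python
# Plausible sketch: one ResNet-18 backbone shared by three task heads for
# path tracking, obstacle detection, and object-handling-zone recognition.
import torch
import torch.nn as nn
from torchvision.models import resnet18

class MultiTaskResNet18(nn.Module):
    def __init__(self):
        super().__init__()
        backbone = resnet18(weights=None)
        # Drop the classification layer; keep conv stages + global pooling.
        self.features = nn.Sequential(*list(backbone.children())[:-1])
        self.path_head = nn.Linear(512, 2)      # e.g., steering + speed (assumed)
        self.obstacle_head = nn.Linear(512, 2)  # obstacle / no-obstacle (assumed)
        self.zone_head = nn.Linear(512, 3)      # e.g., pick, drop, none (assumed)

    def forward(self, x):
        z = self.features(x).flatten(1)         # (B, 512) shared embedding
        return self.path_head(z), self.obstacle_head(z), self.zone_head(z)

model = MultiTaskResNet18().eval()
with torch.no_grad():
    path, obstacle, zone = model(torch.randn(1, 3, 224, 224))
print(path.shape, obstacle.shape, zone.shape)
```

Sharing the backbone is what keeps three concurrent tasks within the compute and power budget of an edge device like the Jetson Nano.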
{"title":"ResNet-18 based multi-task visual inference and adaptive control for an edge-deployed autonomous robot.","authors":"Sufola Das Chagas Silva Araujo, Goh Kah Ong Michael, Uttam U Deshpande, Sudhindra Deshpande, Manjunath G Avalappa, Yash Amasi, Sumit Patil, Swathi Bhat, Sudarshan Karigoudar","doi":"10.3389/frobt.2025.1680285","DOIUrl":"10.3389/frobt.2025.1680285","url":null,"abstract":"<p><p>Current industrial robots deployed in small and medium-sized businesses (SMEs) are too complex, expensive, or dependent on external computing resources. In order to bridge this gap, we introduce an autonomous logistics robot that combines adaptive control and visual perception on a small edge computing platform. The NVIDIA Jetson Nano was equipped with a modified ResNet-18 model that allowed it to concurrently execute three tasks: object-handling zone recognition, obstacle detection, and path tracking. A lightweight rack-and-pinion mechanism enables payload lifting of up to 2 kg without external assistance. Experimental evaluation in semi-structured warehouse settings demonstrated a path tracking accuracy of 92%, obstacle avoidance success of 88%, and object handling success of 90%, with a maximum perception-to-action latency of 150 m. The system maintains stable operation for up to 3 hours on a single charge. Unlike other approaches that focus on single functions or require cloud support, our design integrates navigation, perception, and mechanical handling into a low-power, standalone solution. This highlights its potential as a practical and cost-effective automation platform for SMEs.</p>","PeriodicalId":47597,"journal":{"name":"Frontiers in Robotics and AI","volume":"12 ","pages":"1680285"},"PeriodicalIF":3.0,"publicationDate":"2025-11-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12624282/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145557698","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2025-11-03 · eCollection Date: 2025-01-01 · DOI: 10.3389/frobt.2025.1681187
Mau Adachi, Masayuki Kakio
As service robots become increasingly integrated into public spaces, effective communication between robots and humans is essential. Elevators, being common shared spaces, present unique challenges and opportunities for such interactions. In this study, we developed a Human-Facility Interaction (HFI) system to facilitate communication between service robots and passengers in elevator environments. The system provided both verbal (voice announcements) and non-verbal (light signals) information to passengers waiting for an elevator alongside a service robot. We installed the system in a hotel and conducted two experiments involving 31 participants to evaluate its impact on passengers' impressions of the elevator and the robot. Our findings revealed that voice-based information significantly improved passengers' impressions and reduced perceived waiting time. However, light-based information had minimal impact on impressions and unexpectedly increased perceived waiting time. These results offer valuable insights for designing future HFI systems to support the integration of service robots in buildings.
{"title":"Human-facility interaction improving people's understanding of service robots and elevators - system design and evaluation.","authors":"Mau Adachi, Masayuki Kakio","doi":"10.3389/frobt.2025.1681187","DOIUrl":"10.3389/frobt.2025.1681187","url":null,"abstract":"<p><p>As service robots become increasingly integrated into public spaces, effective communication between robots and humans is essential. Elevators, being common shared spaces, present unique challenges and opportunities for such interactions. In this study, we developed a Human-Facility Interaction (HFI) system to facilitate communication between service robots and passengers in elevator environments. The system provided both verbal (voice announcements) and non-verbal (light signals) information to passengers waiting for an elevator alongside a service robot. We installed the system in a hotel and conducted two experiments involving 31 participants to evaluate its impact on passengers' impressions of the elevator and the robot. Our findings revealed that voice-based information significantly improved passengers' impressions and reduced perceived waiting time. However, light-based information had minimal impact on impressions and unexpectedly increased perceived waiting time. These results offer valuable insights for designing future HFI systems to support the integration of service robots in buildings.</p>","PeriodicalId":47597,"journal":{"name":"Frontiers in Robotics and AI","volume":"12 ","pages":"1681187"},"PeriodicalIF":3.0,"publicationDate":"2025-11-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12620198/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145551645","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2025-11-03 · eCollection Date: 2025-01-01 · DOI: 10.3389/frobt.2025.1674421
Kamilya Smagulova, Ahmed Elsheikh, Diego A Silva, Mohammed E Fouda, Ahmed M Eltawil
Autonomous driving has the potential to enhance driving comfort and accessibility, reduce accidents, and improve road safety, with vision sensors playing a key role in enabling vehicle autonomy. Among existing sensors, event-based cameras offer advantages such as a high dynamic range, low power consumption, and enhanced motion detection capabilities compared to traditional frame-based cameras. However, their sparse and asynchronous data present unique processing challenges that require specialized algorithms and hardware. While some models originally developed for frame-based inputs have been adapted to handle event data, they often fail to fully exploit the distinct properties of this novel data format, primarily due to its fundamental structural differences. As a result, new algorithms, including neuromorphic ones, have been developed specifically for event data. Many of these models are still in the early stages and often lack the maturity and accuracy of traditional approaches. This survey focuses on end-to-end event-based object detection for autonomous driving, covering key aspects such as sensing and processing hardware designs, datasets, and algorithms, including dense, spiking, and graph-based neural networks, along with relevant encoding and pre-processing techniques. In addition, this work highlights shortcomings in current evaluation practices that hinder fair and meaningful comparisons across different event-data processing approaches and hardware platforms. Within the scope of this survey, system-level throughput was evaluated from raw event data to model output on an RTX 4090 24 GB GPU for several state-of-the-art models using the GEN1 and 1MP datasets. The study closes with a discussion and outlines potential directions for future research.
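A hedged sketch of what such a system-level throughput measurement can look like, from raw event batches through preprocessing to a model call. The event format, the histogram preprocessing, and the stand-in detector are assumptions; the survey benchmarks real state-of-the-art models on GPU.

```python
# End-to-end timing harness sketch: raw events -> preprocessing -> model.
import time
import numpy as np

rng = np.random.default_rng(0)

def preprocess(events, h=240, w=304):
    """Accumulate (t, x, y, polarity) events into a 2-channel count histogram."""
    hist = np.zeros((2, h, w), dtype=np.float32)
    np.add.at(hist, (events[:, 3].astype(int),   # polarity channel
                     events[:, 2].astype(int),   # y
                     events[:, 1].astype(int)),  # x
              1.0)
    return hist

def detector(frame):
    """Placeholder for a real event-based detector."""
    return frame.mean()

# 100 synthetic 50-ms event batches at GEN1-like resolution (304 x 240).
batches = [np.column_stack([np.sort(rng.uniform(0, 0.05, 10_000)),
                            rng.integers(0, 304, 10_000),
                            rng.integers(0, 240, 10_000),
                            rng.integers(0, 2, 10_000)]) for _ in range(100)]

start = time.perf_counter()
for b in batches:
    detector(preprocess(b))
elapsed = time.perf_counter() - start
print(f"end-to-end throughput: {len(batches) / elapsed:.1f} batches/s")
```

Timing the whole pipeline rather than the model alone is what makes such numbers comparable across encoding schemes and hardware platforms.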
{"title":"Efficient and real-time perception: a survey on end-to-end event-based object detection in autonomous driving.","authors":"Kamilya Smagulova, Ahmed Elsheikh, Diego A Silva, Mohammed E Fouda, Ahmed M Eltawil","doi":"10.3389/frobt.2025.1674421","DOIUrl":"10.3389/frobt.2025.1674421","url":null,"abstract":"<p><p>Autonomous driving has the potential to enhance driving comfort and accessibility, reduce accidents, and improve road safety, with vision sensors playing a key role in enabling vehicle autonomy. Among existing sensors, event-based cameras offer advantages such as a high dynamic range, low power consumption, and enhanced motion detection capabilities compared to traditional frame-based cameras. However, their sparse and asynchronous data present unique processing challenges that require specialized algorithms and hardware. While some models originally developed for frame-based inputs have been adapted to handle event data, they often fail to fully exploit the distinct properties of this novel data format, primarily due to its fundamental structural differences. As a result, new algorithms, including neuromorphic, have been developed specifically for event data. Many of these models are still in the early stages and often lack the maturity and accuracy of traditional approaches. This survey paper focuses on end-to-end event-based object detection for autonomous driving, covering key aspects such as sensing and processing hardware designs, datasets, and algorithms, including dense, spiking, and graph-based neural networks, along with relevant encoding and pre-processing techniques. In addition, this work highlights the shortcomings in the evaluation practices to ensure fair and meaningful comparisons across different event data processing approaches and hardware platforms. Within the scope of this survey, system-level throughput was evaluated from raw event data to model output on an RTX 4090 24GB GPU for several state-of-the-art models using the GEN1 and 1MP datasets. The study also includes a discussion and outlines potential directions for future research.</p>","PeriodicalId":47597,"journal":{"name":"Frontiers in Robotics and AI","volume":"12 ","pages":"1674421"},"PeriodicalIF":3.0,"publicationDate":"2025-11-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12620194/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145551676","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2025-10-31 · eCollection Date: 2025-01-01 · DOI: 10.3389/frobt.2025.1606774
Jidong Luo, Guoyi Wang, Yanjiao Lei, Dong Wang, Yayong Chen, Hongzhou Zhang
Cleaning photovoltaic (PV) panels is essential for a PV station, as dirt and dust reduce the effective solar irradiation and weaken the efficiency of converting solar energy into free electrons. Manual cleaning is a challenge for PV stations because it is inconsistent in cleaning efficacy and unsafe given the summed voltage and current of the arrays. This paper therefore develops a cleaning robot with PV detection, path planning, and action control. Firstly, a lightweight Mobile-ViT (Mobile Vision Transformer) model with a self-attention mechanism was used to improve YOLOv8 (You Only Look Once v8), achieving an accuracy of 91.08% at a processing speed of 215 fps (frames per second). Secondly, an A* and a DWA (Dynamic Window Approach) path planning algorithm were improved; in simulation, the time consumption decreased from 1.19 to 0.66 s and the number of turns decreased from 23 to 10. Finally, the robot was evaluated and calibrated in both indoor and outdoor environments. The results showed that the algorithm can successfully clean PV arrays without manual control, with the cleaning rate increasing by 23% after its implementation. This study supports the maintenance of PV stations and serves as a reference for technical applications of deep learning, computer vision, and robot navigation.
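As one concrete illustration of reducing the turn count in grid path planning, the sketch below adds a heading-change penalty to a standard A* g-cost, which biases the search toward straighter paths. The grid, penalty weight, and heuristic are assumptions, not the authors' improved A*/DWA implementation.

```python
# A* with a turn penalty: changing the move direction costs extra, so the
# planner prefers paths with fewer turns, in the spirit of the paper's goal.
import heapq
from itertools import count

def astar_turn_penalty(grid, start, goal, turn_cost=0.5):
    """4-connected A* whose g-cost adds `turn_cost` on each heading change."""
    h = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])  # Manhattan heuristic
    tie = count()  # tiebreaker so heap never compares nodes/directions
    open_set = [(h(start), 0.0, next(tie), start, None, [start])]
    best = {}
    while open_set:
        f, g, _, node, came_dir, path = heapq.heappop(open_set)
        if node == goal:
            return path
        if best.get((node, came_dir), float("inf")) <= g:
            continue
        best[(node, came_dir)] = g
        for d in [(1, 0), (-1, 0), (0, 1), (0, -1)]:
            nxt = (node[0] + d[0], node[1] + d[1])
            if not (0 <= nxt[0] < len(grid) and 0 <= nxt[1] < len(grid[0])):
                continue
            if grid[nxt[0]][nxt[1]]:  # 1 = obstacle cell
                continue
            ng = g + 1 + (turn_cost if came_dir not in (None, d) else 0)
            heapq.heappush(open_set,
                           (ng + h(nxt), ng, next(tie), nxt, d, path + [nxt]))
    return None

grid = [[0, 0, 0, 0],
        [0, 1, 1, 0],
        [0, 0, 0, 0],
        [1, 1, 0, 0]]
print(astar_turn_penalty(grid, (0, 0), (3, 3)))
```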
{"title":"A photovoltaic panel cleaning robot with a lightweight YOLO v8.","authors":"Jidong Luo, Guoyi Wang, Yanjiao Lei, Dong Wang, Yayong Chen, Hongzhou Zhang","doi":"10.3389/frobt.2025.1606774","DOIUrl":"10.3389/frobt.2025.1606774","url":null,"abstract":"<p><p>Cleaning PV (photovoltaic) panels is essential for a PV station, as dirt or dust reduces the effective irradiation of solar energy and weakens the efficiency of converting solar energy into free electrons. The inconsistent (cleaning efficacy) and unsafe (summarized voltage and current) manual method is a challenge for a PV station. Therefore, this paper develops a cleaning robot with PV detection, path planning, and action control. Firstly, a lightweight Mobile-VIT (Mobile Vision Transformer) model with a Self-Attention mechanism was used to improve YOLOv8 (You Only Look Once v8), resulting in an accuracy of 91.08% and a processing speed of 215 fps (frames per second). Secondly, an A* and a DWA (Dynamic Window Approach) path planning algorithm were improved. The simulation result shows that the time consumption decreased from 1.19 to 0.66 s and the Turn Number decreased from 23 to 10 p (places). Finally, the robot was evaluated and calibrated in both indoor and outdoor environments. The results showed that the algorithm can successfully clean PV arrays without manual control, with the rate increasing by 23% after its implementation. This study supports the maintenance of PV stations and serves as a reference for technical applications of deep learning, computer vision, and robot navigation.</p>","PeriodicalId":47597,"journal":{"name":"Frontiers in Robotics and AI","volume":"12 ","pages":"1606774"},"PeriodicalIF":3.0,"publicationDate":"2025-10-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12615241/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145543298","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}