Pub Date: 2024-11-27 | eCollection Date: 2024-01-01 | DOI: 10.3389/frobt.2024.1485177
Stijn Kindt, Elias Thiery, Stijn Hamelryckx, Adrien Deraes, Tom Verstraten
This paper presents the design of the passive upper limb exosuit that won the design competition at the 2023 ASTM Exo Games. The tasks were first analyzed to derive the requirements of the design. A design was then proposed, based on the HeroWear Apex exosuit but with improvements by the competition team members. The four tasks of the competition are discussed in detail, including examples of good and poor execution practice. Experiments were performed to measure the forces generated in the elastic elements that support the back and those that support the arms. Flex tests are also discussed, showing that the exosuit does not meaningfully hinder the user's movement when switched off. Based on the performance during the tasks and on competitors' designs, improvements to the overall design are proposed for future versions.
Title: Development of an upper limb passive exosuit for the 2023 ASTM Exo Games.
Pub Date: 2024-11-26 | eCollection Date: 2024-01-01 | DOI: 10.3389/frobt.2024.1511126
Loris Roveda
Title: Editorial: Human-robot collaboration in Industry 5.0: a human-centric AI-based approach.
Pub Date: 2024-11-22 | eCollection Date: 2024-01-01 | DOI: 10.3389/frobt.2024.1457926
Esther I Zoller, Sibylle von Ballmoos, Nicolas Gerig, Philippe C Cattin, Georg Rauter
Introduction: Ergonomic issues are widespread among surgeons performing teleoperated robotic surgery. Because the ergonomics of a teleoperation system depends on the controller handle, the handle needs to be designed carefully. While the importance of the controller handle in robot-assisted telemanipulation has been highlighted previously, most existing work on the usability of human-robot systems for surgery was qualitative in nature or did not focus on surgery-specific tasks.
Methods: We investigated the influence of nine different grasp-type telemanipulator handles on the usability of a lambda.6 haptic input device for a virtual six-degrees-of-freedom peg-in-hole task. User performance with the different handles was assessed through four usability metrics: i) task completion time, ii) dimensionless jerk, iii) collision forces, and iv) perceived workload. We compared these usability results with those of a prior study that examined only the functional rotational workspace of the same human-robot system.
Results: The linear mixed-effect model (LMM) analysis showed that all four usability metrics were dependent on the telemanipulator handle. Moreover, the LMM analysis showed an additional contribution of the hole accessibility to the usability of the human-robot system.
Discussion: When contact forces between the follower end-effector and its surroundings are not critical, the fixed-hook-grasp handle showed the best results of the nine tested handles. When low contact forces are crucial, the tripod-grasp handle was most suitable. It can thus be deduced that different grasp-type telemanipulator handles affect system usability for a surgery-related, teleoperated six-degrees-of-freedom placement task. Maximizing the functional rotational workspace can also positively affect system usability.
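The dimensionless jerk metric used as a usability measure above can be computed from a sampled end-effector trajectory. A minimal sketch of one common formulation (integrated squared jerk made unit-free by duration and path length); this is an illustrative variant, not necessarily the exact definition used in the study:

```python
import numpy as np

def dimensionless_jerk(positions, dt):
    """Movement-smoothness metric: integrated squared jerk, made unit-free
    by scaling with duration^5 / path_length^2. Lower values = smoother.
    positions: (N, D) array of positions sampled every dt seconds."""
    vel = np.gradient(positions, dt, axis=0)
    acc = np.gradient(vel, dt, axis=0)
    jerk = np.gradient(acc, dt, axis=0)
    duration = dt * (len(positions) - 1)
    path_length = np.linalg.norm(np.diff(positions, axis=0), axis=1).sum()
    # Rectangle-rule integral of ||jerk||^2 over the trajectory
    integral = (jerk ** 2).sum() * dt
    return duration ** 5 * integral / path_length ** 2
```

A jittery trajectory scores far higher than a smooth one covering the same path, which is what makes the metric useful for comparing handles independently of movement amplitude and duration.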
Title: Handle shape influences system usability in telemanipulation.
In the realm of precision cattle health monitoring, this paper introduces the development and evaluation of a novel wearable device for continuous health monitoring of cattle. The device integrates a sustainable solar-powered module, real-time signal acquisition and processing, and a storage module within an ergonomically curved casing designed for non-invasive monitoring. The curvature of the casing is tailored to fit the contours of the cattle's neck, significantly enhancing signal accuracy, particularly in temperature signal acquisition. The core module is equipped with precision temperature sensors and inertial measurement units, using an Arduino MKR ZERO board for data acquisition and processing. Field tests conducted on a cohort of ten cattle not only validated the accuracy of the temperature sensing but also demonstrated the potential of machine learning, particularly the Support Vector Machine algorithm, for precise behavior classification and step counting, with an average accuracy of 97.27%. This study innovatively combines real-time temperature monitoring, behavior classification, and step counting within a single self-powered device. The results underscore the feasibility of this technology for enhancing cattle welfare and farm management efficiency, and give clear direction for future research to further develop these devices for large-scale applications.
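The Support Vector Machine behavior-classification step can be sketched as follows: windowed inertial-sensor features fed to an SVM. The feature choice (per-window mean and standard deviation) and the synthetic "walking" vs. "resting" data are illustrative assumptions, not the paper's actual pipeline:

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

def extract_features(window):
    """Per-window mean and std of each of the 3 accelerometer axes (6 features)."""
    return np.concatenate([window.mean(axis=0), window.std(axis=0)])

rng = np.random.default_rng(0)
windows, labels = [], []
for _ in range(100):
    t = np.linspace(0, 2, 50)[:, None]
    # Synthetic stand-ins: oscillatory "walking" vs. low-variance "resting" windows
    walking = np.sin(2 * np.pi * 3 * t + rng.uniform(0, 2 * np.pi, 3)) \
              + 0.1 * rng.standard_normal((50, 3))
    resting = 0.05 * rng.standard_normal((50, 3))
    windows += [extract_features(walking), extract_features(resting)]
    labels += ["walking", "resting"]

X = np.array(windows)
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
clf.fit(X[:160], labels[:160])          # train on the first 160 windows
acc = clf.score(X[160:], labels[160:])  # hold out the last 40 for evaluation
```

On real collar data the windows would come from the device's IMU stream, but the structure (windowing, feature extraction, scaling, SVM) stays the same.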
Title: Design of an intelligent wearable device for real-time cattle health monitoring.
Pub Date: 2024-11-21 | DOI: 10.3389/frobt.2024.1441960
Zhenhua Yu, Yalou Han, Lukas Cha, Shihong Chen, Zeyu Wang, Yang Zhang
A robotic probe manipulator for echocardiography (echo) can potentially reduce cardiac radiologists' physical burden. Echo procedures performed with industrial robots offer a wide range of motion (RoM) but pose safety risks because the robot may clamp the patient against the bed. Conversely, soft robotic manipulators for echo apply safe contact forces but suffer from a limited RoM. During COVID-19, cardiac radiologists explored performing echo on prone-positioned patients, which yielded good-quality images but was difficult to perform manually. From a robot design perspective, the prone position allows a safer robot without the clamping issue, because all actuators are under the patient and only minimal RoM is needed to reach the cardiac windows. In this work, we propose a robotic probe manipulator for echo in the prone position that combines a delta 3D printer with a soft end-effector, and we investigate its feasibility in a clinical setting. We implemented the robot as a scanner-type device in which the probe manipulator scans from under a bed with an opening around the chest area. The doctor controls the robot with a joystick and a keypad while watching a camera view of the chest area and the ultrasound display as feedback. In the experiments, three doctors and three medical students scanned the parasternal window of the same healthy subject with the robot and then manually. Two expert cardiologists evaluated the captured ultrasound images. All medical personnel could obtain all the required views with the robot, but the scanning time was considerably longer than for manual scanning. The ultrasound image quality scores of the doctors' group remained constant between manual and robotic scans; however, in the students' group the scores for the robotic scan were lower. In summary, this work verified that expert medical doctors can obtain clinically sufficient echocardiography images in the prone position using the proposed robotic probe manipulator. The robot can be further developed with a semi-automatic procedure to serve as a platform for safe and ergonomic echocardiography.
Title: On the feasibility of a robotic probe manipulator for echocardiography in the prone position.
Pub Date: 2024-11-18 | DOI: 10.3389/frobt.2024.1474077
Muhammad Wildan Gifari, Tomoko Machino-Ohtsuka, Takeshi Machino, Modar Hassan, Kenji Suzuki
Pub Date: 2024-11-18 | eCollection Date: 2024-01-01 | DOI: 10.3389/frobt.2024.1450097
Seyed Mojtaba Karbasi, Alexander Refsum Jensenius, Rolf Inge Godøy, Jim Torresen
This paper investigates the potential of intrinsically motivated reinforcement learning (IMRL) for robotic drumming. For this purpose, we implemented an IMRL-based algorithm for a drumming robot called ZRob, an underactuated two-DoF robotic arm with flexible grippers. Two ZRob robots were instructed to play rhythmic patterns derived from MIDI files. The RL algorithm is based on the deep deterministic policy gradient (DDPG) method, but instead of relying solely on extrinsic rewards, the robots are trained with a combination of extrinsic and intrinsic reward signals. The training experiments show that using the intrinsic reward can lead to meaningful novel rhythmic patterns, whereas using only the extrinsic reward leads to predictable patterns identical to the MIDI inputs. Additionally, the observed drumming patterns are influenced not only by the learning algorithm but also by the robots' physical dynamics and the drum's constraints. This work offers new insights into the potential of embodied intelligence for musical performance.
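The extrinsic/intrinsic reward combination at the heart of IMRL can be illustrated with a toy curiosity module, where the intrinsic reward is the prediction error of a learned forward model (a common IMRL formulation; the linear model and weighting below are simplifying assumptions, not the paper's DDPG setup):

```python
import numpy as np

class CuriosityReward:
    """Toy intrinsic-reward module: intrinsic reward = forward-model
    prediction error ("surprise"). As the model learns a transition,
    the intrinsic reward for it decays, pushing the agent toward novelty."""

    def __init__(self, state_dim, action_dim, lr=0.05, beta=1.0):
        self.W = np.zeros((state_dim, state_dim + action_dim))  # linear forward model
        self.lr, self.beta = lr, beta

    def combined_reward(self, s, a, s_next, r_extrinsic):
        x = np.concatenate([s, a])
        err = s_next - self.W @ x              # prediction error of next state
        r_intrinsic = float(err @ err)         # surprise = squared error
        self.W += self.lr * np.outer(err, x)   # online LMS model update
        return r_extrinsic + self.beta * r_intrinsic
```

Feeding this combined signal to the critic (instead of the extrinsic reward alone) is what lets the policy drift toward novel rhythmic patterns rather than exact MIDI reproduction.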
Title: Embodied intelligence for drumming; a reinforcement learning approach to drumming robots.
Title: Editorial: Advanced motion control and navigation of robots in extreme environments.
Pub Date: 2024-11-15 | DOI: 10.3389/frobt.2024.1510013
Allahyar Montazeri, Nargess Sadeghzadeh-Nokhodberiz, Khoshnam Shojaei, Kaspar Althoefer
Pub Date: 2024-11-15 | eCollection Date: 2024-01-01 | DOI: 10.3389/frobt.2024.1468385
Jiaxi Lu, Ryota Takamido, Yusheng Wang, Jun Ota
This study presents an experience-based hierarchical-structure optimization algorithm for the robotic system environment design problem, which combines motion planning and environment arrangement. The motion planning problem, which can be defined as a multiple-degree-of-freedom (m-DOF) problem, together with the environment arrangement problem, which can be defined as a free-DOF problem, forms a high-dimensional optimization problem. A hierarchical structure was therefore established, with the higher layer solving the environment arrangement problem and the lower layer solving motion planning. Previously planned trajectories and past results for this design problem were first assembled into datasets; however, these cannot be assumed optimal. This study therefore proposes an experience-reuse scheme that selects the most "useful" experience from the datasets and reuses it for new problems, refining the results in the datasets and producing better environment arrangements with shorter path lengths in the same time. To this end, a hierarchical caseGA-ERTC algorithm is proposed. In the higher layer, a case-injected genetic algorithm (GA) tackles the optimization challenges in robot environment design by leveraging experiential insights. Performance indices for the arrangement of the robot system's environment are determined by the robotic arm's motion and by the path length computed with an experience-driven random tree (ERT) algorithm.
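The "select the most useful experience" step can be illustrated as nearest-neighbor retrieval over problem descriptors: the stored case closest to the new problem is the one whose solution seeds the new search. The flat feature vectors and dictionary layout here are hypothetical stand-ins for whatever descriptors the caseGA-ERTC pipeline actually uses:

```python
import numpy as np

def select_case(query_features, case_db):
    """Return the stored case whose problem features are closest to the
    query (Euclidean distance). Its stored solution (e.g., a past
    arrangement or trajectory) would then seed the new optimization."""
    feats = np.array([case["features"] for case in case_db])
    idx = int(np.argmin(np.linalg.norm(feats - query_features, axis=1)))
    return case_db[idx]
```

Injecting the retrieved solution into the GA's initial population (rather than starting from random individuals) is what the case-injection idea amounts to.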
Moreover, the effectiveness of the proposed method is illustrated by a 12.59% decrease in path lengths across different settings of hard problems and a 5.05% decrease on easy problems, compared with other state-of-the-art methods on three small robots.
Title: How to arrange the robotic environment? Leveraging experience in both motion planning and environment optimization.
Pub Date: 2024-11-14 | eCollection Date: 2024-01-01 | DOI: 10.3389/frobt.2024.1475069
Juan E Mora-Zarate, Claudia L Garzón-Castro, Jorge A Castellanos Rivillas
Sign languages are one of the main rehabilitation methods for dealing with hearing loss. As with any other language, geographical location influences how signs are made. In Colombia in particular, the hard-of-hearing population lacks education in Colombian Sign Language, mainly due to the small number of interpreters in the educational sector. To help mitigate this problem, machine learning combined with data gloves or computer vision has emerged as the basis of sign translation systems and educational tools; however, such solutions are scarce in Colombia. On the other hand, humanoid robots such as the NAO have shown significant results when used to support learning. This paper proposes a performance evaluation for the design of an activity to support learning all 11 color-based signs of Colombian Sign Language. The activity consists of an evaluation method with two modes activated through user interaction: in the first mode, the user chooses the color sign to be evaluated; in the second, the color sign is chosen at random. The MediaPipe tool was used to extract torso and hand coordinates, which served as the input to a neural network. The network's performance was evaluated running continuously in two scenarios: first, video capture from the computer's webcam, which gave an overall F1 score of 91.6% and a prediction time of 85.2 ms; second, wireless video streaming from the NAO H25 V6 camera, which gave an F1 score of 93.8% and a prediction time of 2.29 s. In addition, we took advantage of the joint redundancy of the NAO H25 V6: its 25 degrees of freedom allowed gestures that created nonverbal human-robot interaction, which may be useful in future work implementing this activity with a deaf community.
Title: Learning signs with NAO: humanoid robot as a tool for helping to learn Colombian Sign Language.
Pub Date: 2024-11-13 | eCollection Date: 2024-01-01 | DOI: 10.3389/frobt.2024.1495445
Husnu Halid Alabay, Tuan-Anh Le, Hakan Ceylan
In developing medical interventions using untethered milli- and microrobots, ensuring safety and effectiveness relies on robust methods for real-time robot detection, tracking, and precise localization within the body. The inherent non-transparency of human tissues significantly challenges these efforts, as traditional imaging systems like fluoroscopy often lack crucial anatomical details, potentially compromising intervention safety and efficacy. To address this technological gap, in this study we build a virtual reality environment housing an exact digital replica (digital twin) of the operational workspace and a robot avatar. We synchronize the virtual and real workspaces and continuously send the robot position data derived from the image stream into the digital twin with a short average delay of around 20-25 ms. This allows the operator to steer the robot by tracking its avatar within the digital twin with near real-time temporal resolution. We demonstrate the feasibility of this approach with millirobots steered in confined phantoms. This concept demonstration can pave the way for improved procedural safety by complementing fluoroscopic guidance with virtual reality enhancement. It also provides a platform for incorporating additional real-time derivative data (e.g., instantaneous robot velocity), intraoperative physiological data obtained from the patient (e.g., blood flow rate), and pre-operative physical simulation models (e.g., periodic body motions) to further refine robot control.
Title: X-ray fluoroscopy guided localization and steering of miniature robots using virtual reality enhancement.