Pub Date: 2024-04-16 | DOI: 10.3389/frobt.2024.1358857
Solomon Pekris, Robert D. Williams, Thibaud Atkins, Ioannis Georgilas, Nicola Bailey
Introduction: Compliant mechanisms, especially continuum robots, are becoming integral to advancements in minimally invasive surgery due to their ability to autonomously navigate natural pathways, significantly reducing collision severity. A major challenge lies in developing an effective control strategy to accurately reflect their behavior for enhanced operational precision. Methods: This study examines the trajectory tracking capabilities of a tendon-driven continuum robot at its tip. We introduce a novel feedforward control methodology that leverages a mathematical model based on Cosserat rod theory. To mitigate the computational challenges inherent in such models, we implement an implicit time discretization strategy. This approach simplifies the governing equations into space-domain ordinary differential equations, facilitating real-time computational efficiency. The control strategy is devised to enable the robot tip to follow a dynamically prescribed trajectory in two dimensions. Results: The efficacy of the proposed control method was validated through experimental tests on six different demand trajectories, with a motion capture system employed to assess positional accuracy. The findings indicate that the robot can track trajectories with an accuracy within 9.5%, showcasing consistent repeatability across different runs. Discussion: The results from this study mark a significant step towards establishing an efficient and precise control methodology for compliant continuum robots. The demonstrated accuracy and repeatability of the control approach significantly enhance the potential of these robots in minimally invasive surgical applications, paving the way for further research and development in this field.
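The implicit-time-discretization idea the authors use can be illustrated on a much simpler model than their Cosserat rod equations. The sketch below (an editorial illustration, not the paper's code) applies a backward-Euler time step to the 1-D diffusion equation u_t = u_xx: each time step becomes a purely spatial tridiagonal system, which stays stable even for large steps.

```python
# Minimal illustration (not the authors' Cosserat model): an implicit
# (backward Euler) time step turns the PDE u_t = u_xx into a purely
# spatial problem at each step: (I - dt*D_xx) u^{k+1} = u^k.
def thomas_solve(a, b, c, d):
    """Solve a tridiagonal system with sub-diagonal a, diagonal b,
    super-diagonal c, and right-hand side d (lists of floats)."""
    n = len(d)
    cp, dp = [0.0] * n, [0.0] * n
    cp[0] = c[0] / b[0]
    dp[0] = d[0] / b[0]
    for i in range(1, n):
        m = b[i] - a[i] * cp[i - 1]
        cp[i] = c[i] / m if i < n - 1 else 0.0
        dp[i] = (d[i] - a[i] * dp[i - 1]) / m
    x = [0.0] * n
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x

def implicit_step(u, dt, dx):
    """One backward-Euler step of u_t = u_xx with fixed (zero) ends."""
    n = len(u)
    r = dt / dx**2
    a = [-r] * n; b = [1 + 2 * r] * n; c = [-r] * n
    # Pin the boundary values (Dirichlet conditions).
    a[0] = c[0] = a[-1] = c[-1] = 0.0
    b[0] = b[-1] = 1.0
    return thomas_solve(a, b, c, list(u))

u = [0.0] * 21
u[10] = 1.0  # initial spike in the middle
for _ in range(50):
    u = implicit_step(u, dt=0.1, dx=0.05)  # stable despite the large dt
```

The same trade applies in the paper: the cost of each step becomes a spatial solve, which is what makes real-time evaluation feasible.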
Title: Model-based trajectory tracking of a compliant continuum robot (Frontiers in Robotics and AI)
Pub Date: 2024-04-12 | DOI: 10.3389/frobt.2024.1337722
Michail-Antisthenis Tsompanas, Igor Balaz
Biohybrid machines (BHMs) are an amalgam of actuators composed of living cells and synthetic materials. They are engineered to improve autonomy, adaptability, and energy efficiency beyond what conventional robots can offer. However, designing these machines is no trivial task for humans, given the field's short history and, thus, the limited experience and expertise in designing and controlling similar entities, such as soft robots. To unlock the advantages of BHMs, we propose to overcome the hindrances of their design process by developing a modular modeling and simulation framework for the digital design of BHMs that incorporates Artificial Intelligence-powered algorithms. Here, we present the initial workings of the first module in an exemplar framework, namely, an evolutionary morphology generator. As a proof of principle for this project, we use the scenario of developing a biohybrid catheter as a medical device capable of reaching hard-to-access regions of the human body to release drugs. We study the automatically generated morphology of actuators that will enable the functionality of that catheter. The preliminary results presented here prompted an update of the methodology to better capture the problem under study, and also provided insights for future versions of the software module.
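The skeleton of an evolutionary morphology generator can be sketched in a few lines. The example below is an editorial toy, not the authors' module: the genome encoding (a bitstring marking which catheter segments carry actuators) and the fitness function (match against a hypothetical target layout) are both invented for illustration.

```python
import random

# Toy evolutionary loop: a "morphology" is a bitstring of actuator
# placements; fitness rewards matching a hypothetical target layout.
# Both the encoding and the fitness are illustrative assumptions.
TARGET = [1, 0, 1, 1, 0, 0, 1, 0, 1, 1]

def fitness(genome):
    return sum(g == t for g, t in zip(genome, TARGET))

def mutate(genome, rate=0.1):
    return [1 - g if random.random() < rate else g for g in genome]

def evolve(pop_size=20, generations=40, seed=0):
    random.seed(seed)
    pop = [[random.randint(0, 1) for _ in TARGET] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        elite = pop[: pop_size // 2]                    # truncation selection
        pop = elite + [mutate(random.choice(elite)) for _ in elite]
    return max(pop, key=fitness)

best = evolve()
```

A realistic generator would replace the bitstring with a richer morphology encoding and the fitness with a physics simulation of the actuated catheter, which is where the bulk of the framework's complexity lies.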
Title: Outline of an evolutionary morphology generator towards the modular design of a biohybrid catheter
Pub Date: 2024-04-12 | DOI: 10.3389/frobt.2024.1359887
Canicius J. Mwitta, Glen C. Rains
Autonomous navigation in agricultural fields presents a unique challenge due to the unpredictable outdoor environment. Various approaches have been explored to tackle this task, each with its own set of challenges. These include GPS guidance, which faces availability issues and struggles to avoid obstacles, and vision guidance techniques, which are sensitive to changes in light, weeds, and crop growth. This study proposes that combining GPS and visual navigation offers an optimal solution for autonomous navigation in agricultural fields. Three solutions for autonomous navigation in cotton fields were developed and evaluated. The first solution utilized a path tracking algorithm, Pure Pursuit, to follow GPS coordinates and guide a mobile robot. It achieved an average lateral deviation of 8.3 cm from the pre-recorded path. The second solution employed a deep learning model, specifically a fully convolutional neural network for semantic segmentation, to detect paths between cotton rows. The mobile rover then navigated using the Dynamic Window Approach (DWA) path planning algorithm, achieving an average lateral deviation of 4.8 cm from the desired path. Finally, the two solutions were integrated for a more practical approach. GPS served as a global planner to map the field, while the deep learning model and DWA acted as a local planner for navigation and real-time decision-making. This integrated solution enabled the robot to navigate between cotton rows with an average lateral distance error of 9.5 cm, offering a more practical method for autonomous navigation in cotton fields.
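The Pure Pursuit tracker mentioned above has a compact standard form: steer along the circular arc that passes through the robot and a lookahead point on the recorded path, with curvature kappa = 2 * y_local / L^2. The sketch below shows that geometry (a generic textbook formulation, not the authors' implementation).

```python
import math

def pure_pursuit_curvature(pose, goal, lookahead):
    """Standard Pure Pursuit steering: curvature of the arc from the
    robot pose (x, y, heading) to a goal point at the given lookahead
    distance L. kappa = 2 * y_local / L^2, where y_local is the goal's
    lateral offset expressed in the robot frame."""
    x, y, th = pose
    dx, dy = goal[0] - x, goal[1] - y
    # Rotate the goal point into the robot frame.
    y_local = -math.sin(th) * dx + math.cos(th) * dy
    return 2.0 * y_local / lookahead**2

# Goal dead ahead -> drive straight (zero curvature).
straight = pure_pursuit_curvature((0, 0, 0), (2, 0), lookahead=2.0)
# Goal up and to the left -> positive curvature (turn left).
left = pure_pursuit_curvature((0, 0, 0), (1.5, 1.0), lookahead=2.0)
```

The lookahead distance is the main tuning knob: shorter values track the recorded GPS path more tightly but can oscillate, longer values cut corners more smoothly.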
Title: The integration of GPS and visual navigation for autonomous navigation of an Ackerman steering mobile robot in cotton fields
Pub Date: 2024-04-12 | DOI: 10.3389/frobt.2024.1392297
Ziheng Xu, Zehao Wu, Zichen Xu, Qingsong Xu
Oral administration is a convenient drug delivery method in daily life. However, it remains challenging to achieve precise targeted delivery and to preserve the efficacy of medications in the harsh, complex environments of the digestive system. This paper proposes an oral multilayer magnetic hydrogel microrobot for targeted delivery and on-demand release driven by a gradient magnetic field. The inner hydrogel shells enclose the designated drugs and magnetic microparticles. The outer hydrogel shells enclose the inner hydrogel shells, magnetic microparticles, and pH neutralizers. The drug release procedure is implemented remotely, layer by layer. When the required gradient magnetic field is applied, the outer hydrogel shells are destroyed to release their contents. The released pH neutralizers condition the surrounding environment so that the local pH does not damage the drugs. Subsequently, the inner hydrogel shells are destroyed to release the drugs. A set of experiments demonstrates wirelessly controlled targeted delivery and release in a Petri dish and in biological tissues. The results demonstrate attractive advantages of the reported microrobot: microcargo delivery with almost no loss, remotely controllable release, and drug protection by the pH neutralizers. It is a promising approach for advancing next-generation precision oral therapies in the digestive system.
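The gradient-field actuation rests on a simple relation: a magnetized particle with moment m in a field gradient dB/dx feels a pulling force F = m * dB/dx along the gradient. The back-of-envelope sketch below uses invented, order-of-magnitude numbers (shell radius, magnetization, gradient), not values from the paper.

```python
import math

# F = m * dB/dx, with moment m = M * V (magnetization times volume).
# All numbers below are illustrative assumptions, not from the paper.
def gradient_pull_force(volume_m3, magnetization_A_per_m, grad_T_per_m):
    moment = volume_m3 * magnetization_A_per_m   # m = M * V   [A*m^2]
    return moment * grad_T_per_m                 # F = m*dB/dx [N]

radius = 50e-6                                   # hypothetical 50-um shell
volume = 4.0 / 3.0 * math.pi * radius**3
force = gradient_pull_force(volume,
                            magnetization_A_per_m=1e4,  # assumed loading
                            grad_T_per_m=10.0)          # assumed gradient
```

Even with generous assumptions the force is tens of nanonewtons, which is why gradient-driven microrobots need strong, well-shaped field gradients to move against tissue friction and fluid drag.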
Title: Magnetic multilayer hydrogel oral microrobots for digestive tract treatment
Pub Date: 2024-04-12 | DOI: 10.3389/frobt.2024.1267072
Md Rejwanul Haque, Md Rafi Islam, Edward Sazonov, Xiangrong Shen
Robotic lower-limb prostheses, with their actively powered joints, may significantly improve amputee users’ mobility and enable them to obtain healthy-like gait in various modes of locomotion in daily life. However, timely recognition of the amputee users’ locomotive mode and mode transition remains a major challenge in robotic lower-limb prosthesis control. In this paper, the authors present a new multi-dimensional dynamic time warping (mDTW)-based intent recognizer to provide high-accuracy recognition of the locomotion mode/mode transition sufficiently early in the swing phase, such that the prosthesis’ joint-level motion controller can operate in the correct locomotive mode and assist the user to complete the desired (and often power-demanding) motion in the stance phase. To support the intent recognizer development, the authors conducted a multi-modal gait data collection study to obtain the related sensor signal data in various modes of locomotion. The collected data were then segmented into individual cycles, generating the templates used in the mDTW classifier. Considering the large number of sensor signals available, we conducted feature selection to identify the most useful sensor signals as the input to the mDTW classifier. We also augmented the standard mDTW algorithm with a voting mechanism to make full use of the data generated from the multiple subjects. To validate the proposed intent recognizer, we characterized its performance using the data accumulated at different percentages of progression into the gait cycle (starting from the beginning of the swing phase). It was shown that the mDTW classifier was able to recognize three locomotive mode/mode transitions (walking, walking to stair climbing, and walking to stair descending) with 99.08% accuracy at 30% progression into the gait cycle, well before the stance phase starts.
With its high performance, low computational load, and easy personalization (through individual template generation), the proposed mDTW intent recognizer may become a highly useful building block of a prosthesis control system to facilitate the robotic prostheses’ real-world use among lower-limb amputees.
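The core of any DTW-based classifier is the dynamic-programming recurrence D[i][j] = cost(a[i], b[j]) + min(D[i-1][j], D[i][j-1], D[i-1][j-1]), which finds the cheapest time-warped alignment between a test signal and a template. The sketch below shows the plain 1-D case; the paper's mDTW extends the per-sample cost to multiple sensor channels (and adds voting), so this is a simplified stand-in, not the authors' recognizer.

```python
# Plain dynamic-time-warping distance between two 1-D sequences.
# mDTW generalizes the per-sample cost to multi-channel sensor frames.
def dtw_distance(a, b):
    n, m = len(a), len(b)
    INF = float("inf")
    D = [[INF] * (m + 1) for _ in range(n + 1)]
    D[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            D[i][j] = cost + min(D[i - 1][j],      # insertion
                                 D[i][j - 1],      # deletion
                                 D[i - 1][j - 1])  # match
    return D[n][m]

# Identical gait cycles align at zero cost; a time-shifted copy stays
# cheap because DTW warps the time axis.
same = dtw_distance([0, 1, 2, 1, 0], [0, 1, 2, 1, 0])
shifted = dtw_distance([0, 1, 2, 1, 0], [0, 0, 1, 2, 1, 0])
```

Classification then reduces to computing this distance against each stored mode template and picking the nearest, which keeps the runtime load low enough for embedded prosthesis controllers.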
Title: Swing-phase detection of locomotive mode transitions for smooth multi-functional robotic lower-limb prosthesis control
Pub Date: 2024-04-10 | DOI: 10.3389/frobt.2024.1347985
Basheer Al-Tawil, Thorsten Hempel, Ahmed A. Abdelrahman, A. Al-Hamadi
Visual simultaneous localization and mapping (V-SLAM) plays a crucial role in the field of robotic systems, especially for interactive and collaborative mobile robots. The growing reliance on robotics has increased the complexity of task execution in real-world applications. Consequently, several types of V-SLAM methods have been developed to facilitate and streamline the functions of robots. This work aims to showcase the latest V-SLAM methodologies, offering clear selection criteria for researchers and developers to choose the right approach for their robotic applications. It chronologically presents the evolution of SLAM methods, highlighting key principles and providing comparative analyses between them. The paper focuses on the integration of the robotic ecosystem with the robot operating system (ROS) as middleware, explores essential V-SLAM benchmark datasets, and presents demonstrative figures for each method’s workflow.
Title: A review of visual SLAM for robotics: evolution, properties, and future applications
Pub Date: 2024-04-10 | DOI: 10.3389/frobt.2024.1298537
Alexandre L. Ratschat, Bob M. van Rooij, Johannes Luijten, Laura Marchal-Crespo
In current virtual reality settings for motor skill training, usually only visual information is provided about the virtual objects the trainee interacts with. However, information gathered through cutaneous (tactile feedback) and muscle mechanoreceptors (kinesthetic feedback) regarding, e.g., object shape is crucial for successfully interacting with those objects. To provide this essential information, previous haptic interfaces have aimed to render either tactile or kinesthetic feedback, while the effectiveness of multimodal tactile and kinesthetic feedback on the perception of virtual object characteristics remains largely unexplored. Here, we present the results of an experiment we conducted with sixteen participants to evaluate the effectiveness of multimodal tactile and kinesthetic feedback on shape perception. Using a within-subject design, participants were asked to reproduce virtual shapes after exploring them without visual feedback, either with congruent tactile and kinesthetic feedback or with kinesthetic feedback alone. Tactile feedback was provided by a cable-driven platform mounted on the fingertip, while kinesthetic feedback was provided by a haptic glove. To measure the participants’ ability to perceive and reproduce the rendered shapes, we measured the time participants spent exploring and reproducing the shapes and the error between the rendered and reproduced shapes after exploration. Furthermore, we assessed the participants’ workload and motivation using well-established questionnaires. We found that concurrent tactile and kinesthetic feedback during shape exploration resulted in lower reproduction errors and longer reproduction times. The longer reproduction times in the combined condition may indicate that participants learned the shapes better and were therefore more careful when reproducing them.
We did not find differences between conditions in the time spent exploring the shapes or the participants’ workload and motivation. The lack of differences in workload between conditions could be attributed to the reported minimal-to-intermediate workload levels, suggesting that there was little room to further reduce the workload. Our work highlights the potential advantages of multimodal congruent tactile and kinesthetic feedback when interacting with tangible virtual objects with applications in virtual simulators for hands-on training applications.
Title: Evaluating tactile feedback in addition to kinesthetic feedback for haptic shape rendering: a pilot study
Pub Date: 2024-04-09 | DOI: 10.3389/frobt.2024.1394849
Felipe N. Martins, José Lima, Andre Schneider de Oliveira, Paulo Costa, Amy Eguchi
(AMRs). The paper describes an innovative approach to the indoor localization system for the competition based on the Extended Kalman Filter (EKF) and ArUco markers. The authors tested and compared different innovation methods for the obtained observations in the EKF, validating their approach in a real scenario using a factory floor with the official specifications provided by the competition organization.
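The "innovation" the editorial refers to is the EKF's measurement residual: the gap between what a marker observation reports and what the filter predicted. The 1-D sketch below shows that update step in its simplest form (an editorial illustration, not the competition entry's code, which works with full poses and ArUco marker geometry).

```python
# Minimal 1-D Kalman measurement update: the innovation (z - x) is
# weighted by the Kalman gain to correct the state estimate.
def kalman_update(x, P, z, R):
    """State estimate x with variance P; observation z with noise R."""
    innovation = z - x        # measurement residual ("innovation")
    S = P + R                 # innovation covariance
    K = P / S                 # Kalman gain: how much to trust z
    return x + K * innovation, (1 - K) * P

x, P = 0.0, 1.0               # prior: position 0, variance 1
x, P = kalman_update(x, P, z=1.0, R=1.0)
```

With equal prior and measurement uncertainty the estimate moves exactly halfway toward the observation and the variance halves; different innovation-handling methods (e.g., gating or rescaling the residual) change how outlier marker detections affect this step.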
Title: Editorial: Educational robotics and competitions
Pub Date : 2024-04-09 DOI: 10.3389/frobt.2024.1330812
Jennifer Molnar, Varun Agrawal, Sonia Chernova
Successful operation of a teleoperated robot depends on a well-designed control scheme to translate human motion into robot motion; however, a single control scheme may not be suitable for all users. On the other hand, individual personalization of control schemes may be infeasible for designers to produce. In this paper, we present a method by which users may be classified into groups with mutually compatible control scheme preferences. Users are asked to demonstrate freehand motions to control a simulated robot in a virtual reality environment. Hand pose data is captured and compared with that of other users using SLAM trajectory similarity analysis techniques. The resulting pairwise trajectory error metrics are used to cluster participants based on their control motions, without foreknowledge of the number or types of control scheme preferences that may exist. The clusters identified for two different robots show that a small number of clusters form stably for each case, each with its own control scheme paradigm. Survey data from participants validates that the clusters identified through this method correspond to the participants’ control scheme rationales, and also identifies nuances in participant control scheme descriptions that may not be obvious to designers relying only on participant explanations of their preferences.
{"title":"Clustering user preferences for personalized teleoperation control schemes via trajectory similarity analysis","authors":"Jennifer Molnar, Varun Agrawal, Sonia Chernova","doi":"10.3389/frobt.2024.1330812","DOIUrl":"https://doi.org/10.3389/frobt.2024.1330812","url":null,"abstract":"Successful operation of a teleoperated robot depends on a well-designed control scheme to translate human motion into robot motion; however, a single control scheme may not be suitable for all users. On the other hand, individual personalization of control schemes may be infeasible for designers to produce. In this paper, we present a method by which users may be classified into groups with mutually compatible control scheme preferences. Users are asked to demonstrate freehand motions to control a simulated robot in a virtual reality environment. Hand pose data is captured and compared with other users using SLAM trajectory similarity analysis techniques. The resulting pairwise trajectory error metrics are used to cluster participants based on their control motions, without foreknowledge of the number or types of control scheme preferences that may exist. The clusters identified for two different robots shows that a small number of clusters form stably for each case, each with its own control scheme paradigm. 
Survey data from participants validates that the clusters identified through this method correspond to the participants’ control scheme rationales, and also identify nuances in participant control scheme descriptions that may not be obvious to designers relying only on participant explanations of their preferences.","PeriodicalId":504612,"journal":{"name":"Frontiers in Robotics and AI","volume":"21 24","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-04-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140721651","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
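The pipeline the abstract describes (pairwise trajectory error metrics, then clustering with no preset number of groups) can be sketched as a simple single-linkage grouping over a distance threshold. The mean pointwise distance used here is only a stand-in for the SLAM trajectory similarity measures the paper employs, and the `threshold` parameter is an assumed tuning knob; function names are illustrative.

```python
import numpy as np
from itertools import combinations

def traj_error(a, b):
    """Mean pointwise Euclidean distance between two equal-length (N, d) trajectories."""
    return float(np.mean(np.linalg.norm(a - b, axis=1)))

def cluster_users(trajs, threshold):
    """Single-linkage clustering from pairwise trajectory errors.
    No cluster count is fixed up front: users whose demonstration
    trajectories differ by less than `threshold` end up grouped together."""
    n = len(trajs)
    parent = list(range(n))          # union-find over participants

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]   # path halving
            i = parent[i]
        return i

    for i, j in combinations(range(n), 2):
        if traj_error(trajs[i], trajs[j]) < threshold:
            parent[find(i)] = find(j)       # merge compatible users

    roots = [find(i) for i in range(n)]
    remap = {r: k for k, r in enumerate(dict.fromkeys(roots))}
    return [remap[r] for r in roots]        # labels renumbered 0..k-1
```

Because the grouping emerges from the pairwise error matrix alone, the number of clusters is an output, matching the paper's "without foreknowledge" framing.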
Pub Date : 2024-04-08 DOI: 10.3389/frobt.2024.1374999
Ganix Lasa, Giovanni Legnani, Hien Nguyen, Mondragon Assembly, Spain, D. Han, M. Y. Park, J. Choi, H. Shin, R. Behrens, S. Rhim
With the growing demand for robots in the industrial field, robot-related technologies with various functions have been introduced. One notable development is the implementation of robots that operate in collaboration with human workers to share tasks, without the need for any physical barriers such as safety fences. The realization of such collaborative operations in practice necessitates the assurance of safety if humans and robots collide. Thus, it is important to establish criteria for such collision scenarios to ensure robot safety and prevent injuries. Collision safety must be ensured in both pinching (quasi-static contact) and impact (transient contact) situations. To this end, we measured the force pain thresholds associated with impacts and evaluated the biomechanical limitations. These measurements were obtained through clinical trials involving physical collisions between human subjects and a device designed for generating impacts, and the force pain thresholds associated with transient collisions between humans and robots were analyzed. Specifically, the force pain threshold was measured at two different locations on the bodies of 37 adults aged 19–32 years, using two impactors with different shapes. The force pain threshold was compared with the results of other relevant studies. The results can help identify biomechanical limitations in a precise and reliable manner to ensure the safety of robots in collaborative applications.
{"title":"Evaluation of force pain thresholds to ensure collision safety in worker-robot collaborative operations","authors":"Ganix Lasa, Giovanni Legnani, Hien Nguyen, Mondragon Assembly, Spain, D. Han, M. Y. Park, J. Choi, H. Shin, R. Behrens, S. Rhim","doi":"10.3389/frobt.2024.1374999","DOIUrl":"https://doi.org/10.3389/frobt.2024.1374999","url":null,"abstract":"With the growing demand for robots in the industrial field, robot-related technologies with various functions have been introduced. One notable development is the implementation of robots that operate in collaboration with human workers to share tasks, without the need of any physical barriers such as safety fences. The realization of such collaborative operations in practice necessitates the assurance of safety if humans and robots collide. Thus, it is important to establish criteria for such collision scenarios to ensure robot safety and prevent injuries. Collision safety must be ensured in both pinching (quasi-static contact) and impact (transient contact) situations. To this end, we measured the force pain thresholds associated with impacts and evaluated the biomechanical limitations. This measurements were obtained through clinical trials involving physical collisions between human subjects and a device designed for generating impacts, and the force pain thresholds associated with transient collisions between humans and robots were analyzed. Specifically, the force pain threshold was measured at two different locations on the bodies of 37 adults aged 19–32 years, using two impactors with different shapes. The force pain threshold was compared with the results of other relevant studies. 
The results can help identify biomechanical limitations in a precise and reliable manner to ensure the safety of robots in collaborative applications.","PeriodicalId":504612,"journal":{"name":"Frontiers in Robotics and AI","volume":"77 11","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-04-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140729232","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}