Pub Date: 2024-04-08 | DOI: 10.3389/frobt.2024.1399217 | Frontiers in Robotics and AI
Editorial: Design, modeling and control of kinematically redundant robots
K. Kyriakopoulos, Yangming Lee, Ivan Virgala, S. M. H. Sadati, Egidio Falotico
Pub Date: 2024-04-08 | DOI: 10.3389/frobt.2024.1352152 | Frontiers in Robotics and AI
Learning-based personalisation of robot behaviour for robot-assisted therapy
Michal Stolarz, Alex Mitrevski, Mohammad Wasil, P. Plöger
During robot-assisted therapy, a robot typically needs to be partially or fully controlled by therapists, for instance using a Wizard-of-Oz protocol; this makes therapeutic sessions tedious to conduct, as therapists cannot fully focus on the interaction with the person under therapy. In this work, we develop a learning-based behaviour model that can be used to increase the autonomy of a robot’s decision-making process. We investigate reinforcement learning as a model training technique and compare different reward functions that consider a user’s engagement and activity performance. We also analyse various strategies that aim to make the learning process more tractable, namely i) behaviour model training with a learned user model, ii) policy transfer between user groups, and iii) policy learning from expert feedback. We demonstrate that policy transfer can significantly speed up the policy learning process, although the reward function has an important effect on the actions that a robot can choose. Although the main focus of this paper is the personalisation pipeline itself, we further evaluate the learned behaviour models in a small-scale real-world feasibility study in which six users participated in a sequence learning game with an assistive robot. The results of this study seem to suggest that learning from guidance may result in the most adequate policies in terms of increasing the engagement and game performance of users, but a large-scale user study is needed to verify the validity of that observation.
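The abstract describes a reward function built from user engagement and activity performance, and policy transfer between user groups. The sketch below is a minimal illustration of those two ideas with hypothetical states, actions, and weights; it is not the authors' implementation or their actual reward formulation.

```python
import random
from collections import defaultdict

# Hypothetical weights and action set -- not taken from the paper.
W_ENGAGEMENT, W_PERFORMANCE = 0.5, 0.5
ACTIONS = ["encourage", "give_hint", "increase_difficulty", "decrease_difficulty"]

def reward(engagement: float, performance: float) -> float:
    """Weighted sum of user engagement and activity performance, both in [0, 1]."""
    return W_ENGAGEMENT * engagement + W_PERFORMANCE * performance

def q_learning_step(q, state, action, r, next_state, alpha=0.1, gamma=0.9):
    """One tabular Q-learning update."""
    best_next = max(q[(next_state, a)] for a in ACTIONS)
    q[(state, action)] += alpha * (r + gamma * best_next - q[(state, action)])

def transfer_policy(source_q):
    """Warm-start a new user's Q-table from a policy learned on another user group."""
    return defaultdict(float, source_q)

# Toy usage: a simulated (learned) user model would emit engagement/performance signals;
# here random numbers stand in for those signals.
q = defaultdict(float)
state = ("medium_difficulty", "low_engagement")
for _ in range(100):
    action = random.choice(ACTIONS)  # pure exploration, for brevity
    engagement, performance = random.random(), random.random()
    next_state = ("medium_difficulty",
                  "high_engagement" if engagement > 0.5 else "low_engagement")
    q_learning_step(q, state, action, reward(engagement, performance), next_state)
    state = next_state

new_user_q = transfer_policy(q)  # policy transfer between user groups
```

Warm-starting the Q-table in this way loosely mirrors the abstract's point that transferring a policy learned on one user group can shorten learning for another.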
{"title":"Learning-based personalisation of robot behaviour for robot-assisted therapy","authors":"Michal Stolarz, Alex Mitrevski, Mohammad Wasil, P. Plöger","doi":"10.3389/frobt.2024.1352152","DOIUrl":"https://doi.org/10.3389/frobt.2024.1352152","url":null,"abstract":"During robot-assisted therapy, a robot typically needs to be partially or fully controlled by therapists, for instance using a Wizard-of-Oz protocol; this makes therapeutic sessions tedious to conduct, as therapists cannot fully focus on the interaction with the person under therapy. In this work, we develop a learning-based behaviour model that can be used to increase the autonomy of a robot’s decision-making process. We investigate reinforcement learning as a model training technique and compare different reward functions that consider a user’s engagement and activity performance. We also analyse various strategies that aim to make the learning process more tractable, namely i) behaviour model training with a learned user model, ii) policy transfer between user groups, and iii) policy learning from expert feedback. We demonstrate that policy transfer can significantly speed up the policy learning process, although the reward function has an important effect on the actions that a robot can choose. Although the main focus of this paper is the personalisation pipeline itself, we further evaluate the learned behaviour models in a small-scale real-world feasibility study in which six users participated in a sequence learning game with an assistive robot. The results of this study seem to suggest that learning from guidance may result in the most adequate policies in terms of increasing the engagement and game performance of users, but a large-scale user study is needed to verify the validity of that observation.","PeriodicalId":504612,"journal":{"name":"Frontiers in Robotics and AI","volume":"282 5","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-04-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140730274","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2024-04-05 | DOI: 10.3389/frobt.2024.1337380 | Frontiers in Robotics and AI
Assimilation of socially assistive robots by older adults: an interplay of uses, constraints and outcomes
Oded Zafrani, Galit Nimrod, Maya Krakovski, Shikhar Kumar, Simona Bar-Haim, Yael Edan
By supporting autonomy, aging in place, and wellbeing in later life, Socially Assistive Robots (SARs) are expected to help humanity face the challenges posed by the rapid aging of the world’s population. For SARs to be successfully accepted and assimilated by older adults, it is necessary to understand the factors affecting their Quality Evaluations (QEs). Previous studies examining Human-Robot Interaction in later life indicated that three aspects shape older adults’ overall QEs of robots: uses, constraints, and outcomes. However, those studies were usually limited in duration, focused on acceptance rather than assimilation, and typically explored only one aspect of the interaction. In the present study, we examined uses, constraints, and outcomes simultaneously and over a long period. Nineteen community-dwelling older adults aged 75–97 were given a SAR for physical training for 6 weeks. Their experiences were documented via in-depth interviews conducted before and after the study period, short weekly telephone surveys, and reports produced by the robots. Analysis revealed two distinct groups: (A) the ‘Fans’, participants who enjoyed using the SAR, attributed added value to it, and experienced a successful assimilation process; and (B) the ‘Skeptics’, participants who did not like it, evaluated its use negatively, and experienced a disappointing assimilation process. Despite the vast differences between the groups, both reported more positive evaluations of SARs at the end of the study than before it began. Overall, the results indicated that the process of SAR assimilation is not homogeneous, and they provide a deeper understanding of the factors shaping older adults’ QEs of SARs following actual use. Additionally, the findings demonstrated the theoretical and practical usefulness of a holistic approach in researching older SAR users.
Pub Date: 2024-04-05 | DOI: 10.3389/frobt.2024.1303440 | Frontiers in Robotics and AI
Tangible document sharing: handing over paper documents across a videoconferencing display
Kazuaki Tanaka, Kentaro Oshiro, Naomi Yamashita, Hideyuki Nakanishi
Conventional techniques for sharing paper documents in teleconferencing tend to introduce two inconsistencies: 1) media inconsistency: a paper document is converted into a digital image on the remote site; and 2) space inconsistency: the workspace deliberately inverts the partner’s handwriting to make a document easier to read. In this paper, we present a novel system that eliminates these inconsistencies. The media and space inconsistencies are resolved by reproducing a real paper document at the remote site and allowing a user to hand over the paper document to a remote partner across a videoconferencing display. From a series of experiments, we found that reproducing a real paper document contributes to a higher sense of information sharing. We also found that handing over a document enhances the sense of space sharing, regardless of whether the document is digital or paper-based. These findings provide insights into designing systems for sharing paper documents across distances.
Pub Date: 2024-04-02 | DOI: 10.3389/frobt.2024.1369566 | Frontiers in Robotics and AI
Webcam-based gaze estimation for computer screen interaction
Lucas Falch, K. Lohan
This paper presents a novel webcam-based approach for gaze estimation on computer screens. Utilizing appearance-based gaze estimation models, the system provides a method for mapping the gaze vector from the user’s perspective onto the computer screen. Notably, it determines the user’s 3D position in front of the screen using only a 2D webcam, without the need for additional markers or equipment. The study presents a comprehensive comparative analysis, assessing the performance of the proposed method against established eye-tracking solutions. This includes a direct comparison with the purpose-built Tobii Eye Tracker 5, a high-end hardware solution, and the webcam-based GazeRecorder software. In experiments replicating head movements, especially those imitating yaw rotations, the study brings to light the inherent difficulties associated with tracking such motions using 2D webcams. This research introduces a solution by integrating Structure from Motion (SfM) into the Convolutional Neural Network (CNN) model. The study’s accomplishments include showcasing the potential for accurate screen gaze tracking with a simple webcam, presenting a novel approach for physical distance computation, and proposing compensation for head movements, laying the groundwork for advancements in real-world gaze estimation scenarios.
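As an illustration of the screen-mapping step the abstract refers to, the snippet below intersects a gaze ray (an eye position and gaze direction, of the kind an appearance-based model might output) with the screen plane and converts the hit point to pixel coordinates. The coordinate frame, screen dimensions, and function name are assumptions made for this example, not details taken from the paper.

```python
import numpy as np

def gaze_to_screen(eye_pos_mm, gaze_dir, screen_w_mm=597.0, screen_h_mm=336.0,
                   res_x=1920, res_y=1080):
    """Intersect a gaze ray with the screen plane, assumed to be z = 0 in a
    screen-centred frame (x right, y down, origin at the screen centre).
    Illustrative only -- the frame convention and screen size are assumptions.
    """
    eye = np.asarray(eye_pos_mm, dtype=float)
    d = np.asarray(gaze_dir, dtype=float)
    d = d / np.linalg.norm(d)
    if abs(d[2]) < 1e-9:
        return None                      # gaze parallel to the screen plane
    t = -eye[2] / d[2]                   # ray parameter at z = 0
    if t < 0:
        return None                      # looking away from the screen
    hit = eye + t * d                    # intersection point in millimetres
    # Convert millimetres to pixel coordinates (origin at the top-left corner).
    px = (hit[0] + screen_w_mm / 2) / screen_w_mm * res_x
    py = (hit[1] + screen_h_mm / 2) / screen_h_mm * res_y
    return px, py

# Example: eye 600 mm in front of the screen centre, looking slightly right and down.
print(gaze_to_screen([0.0, -50.0, 600.0], [0.05, 0.10, -1.0]))
```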
{"title":"Webcam-based gaze estimation for computer screen interaction","authors":"Lucas Falch, K. Lohan","doi":"10.3389/frobt.2024.1369566","DOIUrl":"https://doi.org/10.3389/frobt.2024.1369566","url":null,"abstract":"This paper presents a novel webcam-based approach for gaze estimation on computer screens. Utilizing appearance based gaze estimation models, the system provides a method for mapping the gaze vector from the user’s perspective onto the computer screen. Notably, it determines the user’s 3D position in front of the screen, using only a 2D webcam without the need for additional markers or equipment. The study presents a comprehensive comparative analysis, assessing the performance of the proposed method against established eye tracking solutions. This includes a direct comparison with the purpose-built Tobii Eye Tracker 5, a high-end hardware solution, and the webcam-based GazeRecorder software. In experiments replicating head movements, especially those imitating yaw rotations, the study brings to light the inherent difficulties associated with tracking such motions using 2D webcams. This research introduces a solution by integrating Structure from Motion (SfM) into the Convolutional Neural Network (CNN) model. The study’s accomplishments include showcasing the potential for accurate screen gaze tracking with a simple webcam, presenting a novel approach for physical distance computation, and proposing compensation for head movements, laying the groundwork for advancements in real-world gaze estimation scenarios.","PeriodicalId":504612,"journal":{"name":"Frontiers in Robotics and AI","volume":"91 ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-04-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140754494","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2024-03-25 | DOI: 10.3389/frobt.2024.1240408 | Frontiers in Robotics and AI
Embodied, visible, and courteous: exploring robotic social touch with virtual idols
Yuya Onishi, Kosuke Ogawa, Kazuaki Tanaka, Hideyuki Nakanishi
In recent years, virtual idols have garnered considerable attention because they can perform activities similar to those of real idols. However, because they are fictitious idols with no physical presence, they cannot engage in physical interactions such as handshakes. Combining a robotic hand with a display showing the virtual idol is one way to solve this problem. Even when a physical handshake is possible, however, it is unclear which form of handshake effectively induces the desirable behavior. In this study, we adopted a robotic hand as an interface and aimed to imitate the behavior of real idols. To test the effects of this behavior, we conducted a series of step-wise experiments. These experiments revealed that a handshake by the robotic hand increased the feeling of intimacy toward the virtual idol and made it more enjoyable to respond to a request from the virtual idol. In addition, viewing the virtual idol during the handshake increased the feeling of intimacy with the virtual idol. Moreover, the handshake style peculiar to idols, in which the robotic hand kept holding the user’s hand after the conversation, further increased the feeling of intimacy toward the virtual idol.
Pub Date: 2024-03-22 | DOI: 10.3389/frobt.2024.1303279 | Frontiers in Robotics and AI
Automated disassembly of e-waste—requirements on modeling of processes and product states
José Saenz, T. Felsch, Christoph Walter, Tim König, Olaf Poenicke, Eric Bayrhammer, Mathias Vorbröcker, Dirk Berndt, N. Elkmann, Julia Arlinghaus
Automated disassembly is increasingly in focus for Recycling, Re-use, and Remanufacturing (Re-X) activities. Trends in digitalization, in particular digital twin (DT) technologies and the digital product passport, as well as recently proposed European legislation such as the Net Zero and Critical Materials Acts, will accelerate digitalization of product documentation and factory processes. In this contribution, we look beyond these activities by discussing digital information for stakeholders at the Re-X segment of the value chain. Furthermore, we present an approach to automated product disassembly based on different levels of available product information. The challenges for automated disassembly and the subsequent requirements on modeling of disassembly processes and product states for electronic waste are examined. The authors use a top-down methodology (e.g., a review of existing standards and process definitions) to define an initial data model for disassembly processes. An additional bottom-up approach, whereby five exemplary electronics products were manually disassembled, was employed to analyze the efficacy of the initial data model and to offer improvements. This paper reports on our suggested informal data models for automatic electronics disassembly and the associated robotic skills.
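The paper's own data model is not reproduced here; the following is a hypothetical sketch of how disassembly steps and product states could be represented, using made-up classes and fields, simply to make the kind of modeling the abstract discusses concrete.

```python
from dataclasses import dataclass, field
from enum import Enum
from typing import List

class FastenerType(Enum):
    SCREW = "screw"
    SNAP_FIT = "snap_fit"
    ADHESIVE = "adhesive"

@dataclass
class Component:
    name: str
    material: str
    hazardous: bool = False

@dataclass
class DisassemblyStep:
    """One step of a disassembly process: which component is removed,
    which fastener must be released, and which robotic skill is required."""
    target: Component
    fastener: FastenerType
    robot_skill: str               # e.g. "unscrew", "pry", "cut"
    estimated_time_s: float = 0.0

@dataclass
class ProductState:
    """Snapshot of a product during disassembly: components still attached
    and the steps already executed."""
    product_id: str
    remaining: List[Component] = field(default_factory=list)
    executed: List[DisassemblyStep] = field(default_factory=list)

# Toy example: removing the back cover of a hypothetical e-waste item.
cover = Component("back_cover", "ABS")
state = ProductState("router-001",
                     remaining=[cover, Component("pcb", "FR-4", hazardous=True)])
step = DisassemblyStep(cover, FastenerType.SCREW, robot_skill="unscrew",
                       estimated_time_s=12.0)
state.executed.append(step)
state.remaining.remove(cover)
```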
{"title":"Automated disassembly of e-waste—requirements on modeling of processes and product states","authors":"José Saenz, T. Felsch, Christoph Walter, Tim König, Olaf Poenicke, Eric Bayrhammer, Mathias Vorbröcker, Dirk Berndt, N. Elkmann, Julia Arlinghaus","doi":"10.3389/frobt.2024.1303279","DOIUrl":"https://doi.org/10.3389/frobt.2024.1303279","url":null,"abstract":"Automated disassembly is increasingly in focus for Recycling, Re-use, and Remanufacturing (Re-X) activities. Trends in digitalization, in particular digital twin (DT) technologies and the digital product passport, as well as recently proposed European legislation such as the Net Zero and the Critical materials Acts will accelerate digitalization of product documentation and factory processes. In this contribution we look beyond these activities by discussing digital information for stakeholders at the Re-X segment of the value-chain. Furthermore, we present an approach to automated product disassembly based on different levels of available product information. The challenges for automated disassembly and the subsequent requirements on modeling of disassembly processes and product states for electronic waste are examined. The authors use a top-down (e.g., review of existing standards and process definitions) methodology to define an initial data model for disassembly processes. An additional bottom-up approach, whereby 5 exemplary electronics products were manually disassembled, was employed to analyze the efficacy of the initial data model and to offer improvements. This paper reports on our suggested informal data models for automatic electronics disassembly and the associated robotic skills.","PeriodicalId":504612,"journal":{"name":"Frontiers in Robotics and AI","volume":" 5","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-03-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140217438","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2024-03-21 | DOI: 10.3389/frobt.2024.1331347 | Frontiers in Robotics and AI
Our business, not the robot’s: family conversations about privacy with social robots in the home
Leigh Levinson, Jessica McKinney, Christena Nippert-Eng, Randy Gomez, Selma Šabanović
The targeted use of social robots for the family demands a better understanding of multiple stakeholders’ privacy concerns, including those of parents and children. Through a co-learning workshop that introduced families to the functions and hypothetical uses of social robots in the home, we present preliminary evidence from six families showing that parents and children have different comfort levels with robots collecting and sharing information across different use contexts. Conversations and booklet answers reveal that parents adopted their child’s decision in scenarios where they expect children to have more agency, such as completing homework or cleaning up toys, and when children proposed reasoning that their parents found acceptable. Families expressed relief when they shared the same reasoning in coming to conclusive decisions, signifying an agreement on boundary management between the robot and the family. In cases where parents and children did not agree, they rejected a binary, either-or decision and opted for a third type of response, reflecting skepticism, uncertainty, and/or compromise. Our work highlights the benefits of involving parents and children in child- and family-centered research, including parental abilities to provide cognitive scaffolding and to personalize hypothetical scenarios for their children.
{"title":"Our business, not the robot’s: family conversations about privacy with social robots in the home","authors":"Leigh Levinson, Jessica McKinney, Christena Nippert-Eng, Randy Gomez, Selma Šabanović","doi":"10.3389/frobt.2024.1331347","DOIUrl":"https://doi.org/10.3389/frobt.2024.1331347","url":null,"abstract":"The targeted use of social robots for the family demands a better understanding of multiple stakeholders’ privacy concerns, including those of parents and children. Through a co-learning workshop which introduced families to the functions and hypothetical use of social robots in the home, we present preliminary evidence from 6 families that exhibits how parents and children have different comfort levels with robots collecting and sharing information across different use contexts. Conversations and booklet answers reveal that parents adopted their child’s decision in scenarios where they expect children to have more agency, such as in cases of homework completion or cleaning up toys, and when children proposed what their parents found to be acceptable reasoning for their decisions. Families expressed relief when they shared the same reasoning when coming to conclusive decisions, signifying an agreement of boundary management between the robot and the family. In cases where parents and children did not agree, they rejected a binary, either-or decision and opted for a third type of response, reflecting skepticism, uncertainty and/or compromise. Our work highlights the benefits of involving parents and children in child- and family-centered research, including parental abilities to provide cognitive scaffolding and personalize hypothetical scenarios for their children.","PeriodicalId":504612,"journal":{"name":"Frontiers in Robotics and AI","volume":" 75","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-03-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140221887","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2024-03-21 | DOI: 10.3389/frobt.2024.1305615 | Frontiers in Robotics and AI
Improving engineering students’ understanding of classical physics through visuo-haptic simulations
Guillermo González-Mena, Octavio Lozada-Flores, Dione Murrieta Caballero, J. Noguez, David Escobar-Castillejos
Introduction: The teaching process plays a crucial role in the training of professionals. Traditional classroom-based teaching methods, while foundational, often struggle to effectively motivate students. The integration of interactive learning experiences, such as visuo-haptic simulators, presents an opportunity to enhance both student engagement and comprehension. Methods: In this study, three simulators were developed to explore the impact of visuo-haptic simulations on engineering students’ engagement and their perceptions of learning basic physics concepts. The study used an adapted end-user computing satisfaction questionnaire to assess students’ experiences and perceptions of the simulators’ usability and their utility in learning. Results: Feedback from participants suggests a positive reception towards the use of visuo-haptic simulators, highlighting their usefulness in improving the understanding of complex physics principles. Discussion: The results suggest that incorporating visuo-haptic simulations into educational contexts may offer significant benefits, particularly in STEM courses, where traditional methods may be limited. The positive responses from participants underscore the potential of computer simulations to innovate pedagogical strategies. Future research will focus on assessing the effectiveness of these simulators in enhancing students’ learning and understanding of these concepts in higher-education physics courses.
{"title":"Improving engineering students’ understanding of classical physics through visuo-haptic simulations","authors":"Guillermo González-Mena, Octavio Lozada-Flores, Dione Murrieta Caballero, J. Noguez, David Escobar-Castillejos","doi":"10.3389/frobt.2024.1305615","DOIUrl":"https://doi.org/10.3389/frobt.2024.1305615","url":null,"abstract":"Introduction: The teaching process plays a crucial role in the training of professionals. Traditional classroom-based teaching methods, while foundational, often struggle to effectively motivate students. The integration of interactive learning experiences, such as visuo-haptic simulators, presents an opportunity to enhance both student engagement and comprehension.Methods: In this study, three simulators were developed to explore the impact of visuo-haptic simulations on engineering students’ engagement and their perceptions of learning basic physics concepts. The study used an adapted end-user computing satisfaction questionnaire to assess students’ experiences and perceptions of the simulators’ usability and its utility in learning.Results: Feedback from participants suggests a positive reception towards the use of visuo-haptic simulators, highlighting their usefulness in improving the understanding of complex physics principles.Discussion: Results suggest that incorporating visuo-haptic simulations into educational contexts may offer significant benefits, particularly in STEM courses, where traditional methods may be limited. The positive responses from participants underscore the potential of computer simulations to innovate pedagogical strategies. Future research will focus on assessing the effectiveness of these simulators in enhancing students’ learning and understanding of these concepts in higher-education physics courses.","PeriodicalId":504612,"journal":{"name":"Frontiers in Robotics and AI","volume":" 11","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-03-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140221303","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2024-03-18 | DOI: 10.3389/frobt.2024.1365632 | Frontiers in Robotics and AI
Real-time active constraint generation and enforcement for surgical tools using 3D detection and localisation network
S. Souipas, Anh Nguyen, Stephen Laws, Brian L. Davies, Ferdinando M. Rodriguez y Baena
Introduction: Collaborative robots, designed to work alongside humans for manipulating end-effectors, greatly benefit from the implementation of active constraints. This process comprises the definition of a boundary, followed by the enforcement of some control algorithm when the robot tooltip interacts with the generated boundary. Contact with the constraint boundary is communicated to the human operator through various potential forms of feedback. In fields like surgical robotics, where patient safety is paramount, implementing active constraints can prevent the robot from interacting with portions of the patient anatomy that shouldn’t be operated on. Despite improvements in orthopaedic surgical robots, however, there exists a gap between bulky systems with haptic feedback capabilities and miniaturised systems that only allow for boundary control, where interaction with the active constraint boundary interrupts robot functions. Generally, active constraint generation relies on optical tracking systems and preoperative imaging techniques. Methods: This paper presents a refined version of the Signature Robot, a three degrees-of-freedom, hands-on collaborative system for orthopaedic surgery. Additionally, it presents a method for generating and enforcing active constraints “on-the-fly” using our previously introduced monocular, RGB, camera-based network, SimPS-Net. The network was deployed in real-time for the purpose of boundary definition. This boundary was subsequently used for constraint enforcement testing. The robot was utilised to test two different active constraints: a safe region and a restricted region. Results: The network success rate, defined as the ratio of correct over total object localisation results, was calculated to be 54.7% ± 5.2%. In the safe region case, haptic feedback resisted tooltip manipulation beyond the active constraint boundary, with a mean distance from the boundary of 2.70 mm ± 0.37 mm and a mean exit duration of 0.76 s ± 0.11 s. For the restricted-zone constraint, the operator was successfully prevented from penetrating the boundary in 100% of attempts. Discussion: This paper showcases the viability of the proposed robotic platform and presents promising results of a versatile constraint generation and enforcement pipeline.
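To illustrate the two constraint types reported in the results (a safe region the tooltip should stay inside, and a restricted region it must not enter), here is a minimal proportional-force sketch for a spherical boundary. The spherical geometry, stiffness value, and function name are assumptions made for illustration; the paper's actual constraint geometry and control law may differ.

```python
import numpy as np

def constraint_force(tool_pos, centre, radius, mode="safe", stiffness=500.0):
    """Spring-like force for a spherical active constraint.

    mode="safe":       the tool should stay INSIDE the sphere; a restoring force
                       is applied once the tooltip crosses the boundary outward.
    mode="restricted": the tool must stay OUTSIDE the sphere; a repulsive force
                       is applied if the tooltip penetrates the region.
    Illustrative only -- a simple proportional law, not the paper's controller.
    """
    offset = np.asarray(tool_pos, dtype=float) - np.asarray(centre, dtype=float)
    dist = np.linalg.norm(offset)
    if dist < 1e-9:
        return np.zeros(3)
    direction = offset / dist
    if mode == "safe" and dist > radius:
        return -stiffness * (dist - radius) * direction   # pull back inside
    if mode == "restricted" and dist < radius:
        return stiffness * (radius - dist) * direction    # push back outside
    return np.zeros(3)

# Tooltip 3 mm outside a 30 mm safe region -> force directed back toward the centre.
print(constraint_force([0.0, 0.0, 0.033], centre=[0, 0, 0], radius=0.030, mode="safe"))
```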
{"title":"Real-time active constraint generation and enforcement for surgical tools using 3D detection and localisation network","authors":"S. Souipas, Anh Nguyen, Stephen Laws, Brian L. Davies, Ferdinando M. Rodriguez y Baena","doi":"10.3389/frobt.2024.1365632","DOIUrl":"https://doi.org/10.3389/frobt.2024.1365632","url":null,"abstract":"Introduction: Collaborative robots, designed to work alongside humans for manipulating end-effectors, greatly benefit from the implementation of active constraints. This process comprises the definition of a boundary, followed by the enforcement of some control algorithm when the robot tooltip interacts with the generated boundary. Contact with the constraint boundary is communicated to the human operator through various potential forms of feedback. In fields like surgical robotics, where patient safety is paramount, implementing active constraints can prevent the robot from interacting with portions of the patient anatomy that shouldn’t be operated on. Despite improvements in orthopaedic surgical robots, however, there exists a gap between bulky systems with haptic feedback capabilities and miniaturised systems that only allow for boundary control, where interaction with the active constraint boundary interrupts robot functions. Generally, active constraint generation relies on optical tracking systems and preoperative imaging techniques.Methods: This paper presents a refined version of the Signature Robot, a three degrees-of-freedom, hands-on collaborative system for orthopaedic surgery. Additionally, it presents a method for generating and enforcing active constraints “on-the-fly” using our previously introduced monocular, RGB, camera-based network, SimPS-Net. The network was deployed in real-time for the purpose of boundary definition. This boundary was subsequently used for constraint enforcement testing. The robot was utilised to test two different active constraints: a safe region and a restricted region.Results: The network success rate, defined as the ratio of correct over total object localisation results, was calculated to be 54.7% ± 5.2%. In the safe region case, haptic feedback resisted tooltip manipulation beyond the active constraint boundary, with a mean distance from the boundary of 2.70 mm ± 0.37 mm and a mean exit duration of 0.76 s ± 0.11 s. For the restricted-zone constraint, the operator was successfully prevented from penetrating the boundary in 100% of attempts.Discussion: This paper showcases the viability of the proposed robotic platform and presents promising results of a versatile constraint generation and enforcement pipeline.","PeriodicalId":504612,"journal":{"name":"Frontiers in Robotics and AI","volume":"42 32","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-03-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140231415","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}