Improving Learning in Robotics Teleoperation: Contribution of Eye-tracking in Digital Ethnography

Philippe Fauquet-Alekhine, Julien Bleuze
{"title":"Improving Learning in Robotics Teleoperation: Contribution of Eye-tracking in Digital Ethnography","authors":"Philippe Fauquet-Alekhine, Julien Bleuze","doi":"10.9734/cjast/2024/v43i74412","DOIUrl":null,"url":null,"abstract":"Aims: Digital ethnography has shown its added-value for the analysis of work activity in order to improve the methods of developing the associated necessary competencies. In particular, tracing process methods based on first-person recording of the activity by first-person view camera combined with competencies-oriented and goal-oriented interviews have demonstrated their effectiveness in medicine, nuclear industry and education. However, the teleoperation of robots out of sight requires the use of a specific first-person view camera: an eye-tracking device. This is due to the fact that during the teleoperation, the pilot's head movements are almost non-existent while the eyes move enormously. Yet, the literature is completely void of this type of activity analysis using eye-tracking for teleoperation in robotics. The objective of the present article is to fill this gap by presenting a pilot study characterizing the potential contribution of first-person view tracing process combined with competencies-oriented and goal-oriented interviews for robotics teleoperation out of sight. \nStudy Design: The pilot study has involved two robot pilots individually performing a teleoperation task out of sight. The pilots have been chosen for their difference of experience in teleoperation. Whilst performing the activity, they have been equipped with an eye-tracking device, enabling the recording of the activity at the first-person perspective. An interview based on the Square of PErceived Action model (SPEAC model) has followed in order to access what makes their competencies. \nPlace and Duration of Study: The experiments were undertaken in the simulation training center of the Groupe INTRA-Intervention Robotique sur Accidents, in France, during 2023. \nMethodology: Two pilots had to individually teleoperate a robot from a control console using the videos transmitted on several screens placed in front of them from the cameras on board the robot. The activity consisted of moving the robot through a maze carrying a container in which a ring had to be put after being picked up from the ground, and then bring the whole out of the maze. The activity lasted about 10 to 20 minutes. Each pilot was equipped with an eye-tracking device that made it possible to record their activity for deferred access in order to identify the knowledge and know-how implemented. The interview was conducted using the SPEAC model. At the end of the interview, a matrix of competencies was built for each of the pilots. Software processing made it possible to access quantified data, in particular the vision fixation time for each of the pilots in order to take information from the screens and the control console. \nResults: The comparison of the matrices of competencies made it possible to measure the gap in competence between an experienced pilot and a novice pilot, as well as to identify knowledge and know-how not yet taught in pilot training. The measurement of fixation times has also made it possible to identify a difference that appears interesting to be analyzed in more depth in a future study. 
\nConclusion: Results shows that the method applied is well suited for teleoperation of robots out of sight and provide relevant data to improve training.","PeriodicalId":505676,"journal":{"name":"Current Journal of Applied Science and Technology","volume":"81 19","pages":""},"PeriodicalIF":0.0000,"publicationDate":"2024-07-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Current Journal of Applied Science and Technology","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.9734/cjast/2024/v43i74412","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 0

Abstract

Aims: Digital ethnography has demonstrated its added value for the analysis of work activity, with the aim of improving the methods used to develop the associated competencies. In particular, process-tracing methods based on first-person recording of the activity with a first-person-view camera, combined with competency-oriented and goal-oriented interviews, have proven effective in medicine, the nuclear industry, and education. However, the teleoperation of robots out of sight requires a specific first-person-view camera: an eye-tracking device. This is because, during teleoperation, the pilot's head barely moves while the eyes move constantly. Yet the literature contains no such activity analysis using eye-tracking for teleoperation in robotics. The present article aims to fill this gap by presenting a pilot study characterizing the potential contribution of first-person-view process tracing, combined with competency-oriented and goal-oriented interviews, to the out-of-sight teleoperation of robots.

Study Design: The pilot study involved two robot pilots individually performing a teleoperation task out of sight. The pilots were chosen for their differing levels of teleoperation experience. While performing the activity, they were equipped with an eye-tracking device, enabling the activity to be recorded from the first-person perspective. An interview based on the Square of PErceived ACtion model (SPEAC model) followed in order to access what constitutes their competencies.

Place and Duration of Study: The experiments were undertaken in 2023 at the simulation training center of the Groupe INTRA-Intervention Robotique sur Accidents, in France.

Methodology: Each pilot individually teleoperated a robot from a control console, using video transmitted from the robot's on-board cameras to several screens placed in front of them. The activity consisted of driving the robot through a maze while carrying a container, picking up a ring from the ground, placing it in the container, and then bringing the whole assembly out of the maze. The activity lasted about 10 to 20 minutes. Each pilot was equipped with an eye-tracking device that recorded the activity for later review, making it possible to identify the knowledge and know-how applied. The interview was conducted using the SPEAC model, and at its end a competency matrix was built for each pilot. Software processing provided quantified data, in particular each pilot's visual fixation times when taking information from the screens and the control console.

Results: Comparing the competency matrices made it possible to measure the competence gap between an experienced pilot and a novice pilot, and to identify knowledge and know-how not yet taught in pilot training. The measurement of fixation times also revealed a difference that appears worth analyzing in greater depth in a future study.

Conclusion: The results show that the applied method is well suited to the out-of-sight teleoperation of robots and provides relevant data for improving training.
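The abstract does not detail the software processing behind the fixation-time figures. As a rough illustration of how per-area fixation times could be derived from an eye-tracker's fixation export, here is a minimal sketch: the CSV layout, the two areas of interest (video screens vs. control console), and their coordinates are all assumptions made for illustration, not the authors' actual pipeline.

```python
# Minimal sketch: aggregate fixation time per area of interest (AOI)
# from an eye-tracker's exported fixation list. The AOI rectangles and
# the CSV column names below are hypothetical, chosen for illustration.

import csv
from collections import defaultdict

# Hypothetical AOIs in screen-normalised coordinates:
# (x_min, y_min, x_max, y_max) for each region the pilot can look at.
AOIS = {
    "video_screens": (0.0, 0.0, 1.0, 0.6),    # screens showing on-board camera feeds
    "control_console": (0.0, 0.6, 1.0, 1.0),  # joysticks, buttons, status displays
}

def aoi_of(x, y):
    """Return the AOI containing the gaze point, or None if outside all AOIs."""
    for name, (x0, y0, x1, y1) in AOIS.items():
        if x0 <= x <= x1 and y0 <= y <= y1:
            return name
    return None

def fixation_time_per_aoi(path):
    """Sum fixation durations (ms) per AOI from a fixation-list CSV.

    Assumes one row per detected fixation with columns:
    start_ms, duration_ms, x, y  (x and y normalised to [0, 1]).
    """
    totals = defaultdict(float)
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            name = aoi_of(float(row["x"]), float(row["y"]))
            if name is not None:
                totals[name] += float(row["duration_ms"])
    return dict(totals)

if __name__ == "__main__":
    # Hypothetical export files, one per pilot.
    for pilot, path in [("novice", "pilot_novice_fixations.csv"),
                        ("experienced", "pilot_expert_fixations.csv")]:
        totals = fixation_time_per_aoi(path)
        grand_total = sum(totals.values()) or 1.0  # guard against empty files
        print(pilot, {k: f"{v / grand_total:.1%}" for k, v in totals.items()})
```

Comparing the resulting per-area shares between the novice and the experienced pilot would be one way to quantify the fixation-time difference reported in the Results.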